AI system implementation issues and risk mitigation: living evidence

Living evidence tables provide high-level summaries of key studies and evidence on a particular topic, with links to sources. They are reviewed regularly and updated as new evidence and information is published.

This page is part of the Artificial Intelligence series.

This living evidence brief describes some of the major issues with the implementation of artificial intelligence (AI) in healthcare systems, with reference to the major medical ethical principles of beneficence, non-maleficence, autonomy and justice.1, 2

For each section, three aspects are described:

  • the issue
  • potential enablers and solutions – also known as “best bets” – to mitigate risk
  • frameworks to support governance and/or solutions trialled in real world systems.

At present, however, many of the best bets in this brief are theoretical, advocated by subject matter experts or policy-driven, rather than strategies that have been tested or implemented in actual healthcare systems. The inclusion of an article here does not represent an endorsement by NSW Health but rather reflects the available and dominant literature in the field at the time of writing.

This brief has been developed via a PubMed search of relevant AI literature, targeted searches of the grey literature, and a weekly screening process of top medical journals (e.g. Nature, JAMA, Lancet, BMJ) conducted since 2023.

Regular checks are conducted for new content and any updates are highlighted.

Summary and leading proposed solutions

Healthcare is on the cusp of major changes in both clinical care and administration due to AI developments. The benefits and potential of AI in clinical care are detailed in two other AI living tables.

However, the development of appropriate AI systems and ensuring their smooth and ethical implementation present many challenges, particularly since the science of AI is moving faster than regulatory developments.3-5

  • From a regulatory perspective, different jurisdictions are taking different approaches. AI solutions may be classified as “medical devices” or as higher-risk software, which therefore requires a greater level of oversight.6-8
  • There are well-recognised risks with AI surrounding bias, privacy and security risks, discrimination, lack of transparency, lack of oversight, job displacement and de-personalisation, as well as misapplication of context-dependent algorithms.5, 9
  • Patients' rights to high-quality clinical care, explanations of algorithms' outputs, and data protection compliance are pertinent issues in AI implementation.5, 10-13

Experts and professional organisations advocate that successful, ethical and sustainable adoption of “responsible AI” will be underpinned by:9, 14-29

  • strong governance and minimum standards
  • a risk management approach to the entire AI development process
  • explicit consideration of bias at all stages of the AI development and implementation pipeline, from problem selection and data collection to post-deployment
  • engagement of all stakeholders across AI design and implementation phases
  • high-quality and diverse datasets, with external validation of data and models
  • transparency around the use of technology and methods used to develop AI models. This includes adoption of explainable AI models, rather than black box approaches.
  • continuous monitoring and improvement processes
  • development, implementation and evaluation of models which support clinical practice and create benefit, rather than a drive purely for productivity and efficiency30
  • a realisation that AI is better suited to supporting, rather than replacing, clinicians, and should be viewed as a tool rather than an autonomous entity. Many medical decisions require ethical judgements, rapport-building, interdisciplinary collaboration and empathy to engage in shared decision making.16, 31, 32

Lack of legal and regulatory frameworks

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to develop appropriate governance around AI.

  • Develop robust regulatory frameworks for AI in healthcare to ensure the technology is deployed safely, ethically, and effectively.30
  • Develop health-specific guidance for AI; this can also be based on lessons learnt from other industries with similar risk profiles (e.g. aeronautics) and data sensitivity (e.g. finance).9
  • Develop thorough definitions of AI in healthcare, to allow for robust legislation and regulation.34
  • Develop best practice industry standards, auditing and public reporting requirements for AI developers and users to comply with.19, 35
  • Ensure there are associated deterrent penalties for data misuse.29
  • Undertake a review of existing legislation to identify where greater clarity is required with respect to new legislation, or legislative amendment.35
  • Co-operate across jurisdictions and countries, allowing joint action that reduces the cost of developing AI solutions, progresses regulatory effectiveness and efficiency, and improves the safety of AI solutions in cases of poor outcomes or unintended consequences.9
  • Embed the leadership of Aboriginal and Torres Strait Islander experts in government responses to AI on an ongoing basis, in staffing and advisory functions.35

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support the development of AI governance, or examples of how other jurisdictions are moving forward in regulatory and legislative changes.

Global frameworks

  • The Organisation for Economic Co-operation and Development (OECD) has proposed AI values-based principles: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.36 It discusses the implementation of these principles in different jurisdictions in a number of reports.37, 38
  • The OECD has proposed and continues to update its definition of AI. It argues that if all governments can agree on the same definition, it allows for interoperability across jurisdictions.34
  • The World Health Organization (WHO) has proposed a framework to promote good governance of AI in health and promotion of ethical principles.21, 39, 40

Australia

  • The Australian Government published Australia’s AI Action Plan in 2021. One of its actions specific to healthcare was to fund AI-focused projects under the Medical Research Future Fund.41
  • Australian authors have proposed SALIENT, an end‐to‐end clinical artificial intelligence implementation framework. At the time of publication SALIENT was the only framework with full coverage of all reporting guidelines and provided a starting place for establishing that AI is tested and suitable for implementation in the Australian context.42
  • eHealth NSW is working with the NSW Ministry of Health to adapt the NSW Artificial Intelligence Assurance Framework for NSW Health. Successful and safe adoption of AI within NSW Health requires effective leadership and governance to ensure a coordinated approach to the longer-term development of clinical AI and research activities.43

Other countries

Different countries are taking different approaches to AI regulation, with varying levels of precaution.44

  • The NHS in the United Kingdom has developed an AI centre of excellence.45
  • The NHS in England created an Artificial Intelligence Laboratory (NHS AI Lab) to bring together government, health care providers, academics and technology companies. It is an environment for collaboration to address barriers and deploy AI systems in health care.45, 46
  • The AI and Digital Regulations Service provides guidance for NHS, social care adopters and digital health innovators. It is a multi-agency collaboration to provide comprehensive guidance at each stage of the adoption pathway.47
  • The European Union Agency for Cybersecurity has proposed a platform to share experiences, challenges and opportunities to support policy makers.48
  • The European Union is in the process of developing an “AI Act”.49 It recognises that it may not be possible for each person to give explicit consent to every action that will be performed on their data in the future. Broader consent processes may allow for greater data sharing.49, 50
  • The U.S. Food and Drug Administration (FDA) has created a Digital Health Advisory Committee to help the agency explore the complex, scientific and technical issues related to digital health technologies, such as AI/machine learning, augmented reality, virtual reality, digital therapeutics, wearables, remote patient monitoring and software.51
  • In the United States, the National Academy of Medicine is running The Artificial Intelligence Code of Conduct project, which aims to provide a guiding framework to ensure that AI algorithms and their application in health, healthcare, and biomedical science perform accurately, safely, reliably and ethically.
  • The Canadian Government tabled an Artificial Intelligence and Data Act (AIDA) in 2022 which was described as a first step towards a new regulatory system encouraging the responsible adoption of AI.52
  • The Canadian Institute for Advanced Research (CIFAR)’s Building a Learning Health System for Canadians report highlights the need to develop AI infrastructure, accelerate the development of safe, high performance AI applications, and ensure that relevant policies, investments, partnerships, and regulatory frameworks are in place.53
  • Singapore’s Ministry of Health has developed AI guidelines which cover recommendations for development (design, build, test) and implementation (use, monitor and review).17
  • Specifically for radiology AI, governance and implementation frameworks have been proposed which cover:20, 54
    • regulation, legislation and ethics
    • leadership and staff management
    • stakeholder alignment
    • pipeline integration
    • training of staff
    • validation and evaluation
    • AI auditing and quality assurance
    • AI research and innovation.

Underdeveloped and biased models

Issues

There are a number of issues related to biased AI models.

  • AI algorithms incorporate the values, choices, beliefs, and norms of their developers and of the research designs behind them.32, 55
  • AI models will ultimately reflect biases in data and practices which already exist.30, 56, 57
  • Most AI models to date are built on small, poor quality and/or unrepresentative data sets. They are often made up of retrospective, single-institution data that are unpublished and considered proprietary. These cannot easily generalise to other hospitals, countries or ethnicities.23, 30, 57-61
  • Even a well-designed AI model can subsequently be used in a context for which it was not developed.55
  • Even experienced clinicians can struggle to consistently distinguish between accurate and inaccurate AI predictions and can be misled by inaccurate ones.62

Implications of clinical adoption

  • adopting models that reflect inequities in the practice of medicine will only magnify existing inequities30, 63
  • these biases have significant implications for diagnostic accuracy and can lead to under- or overestimation of risks32
  • this may lead to significant associated medicolegal and malpractice consequences64
  • many AI models are ‘black boxes’ where only inputs and outputs are known, meaning it is not clear how they reach their final results; this makes them appear less trustworthy and more difficult to improve.30, 65, 66

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to ensure AI models are designed in ways that minimise bias.

Education is crucial for fostering a shared understanding and promoting fairness in healthcare.14

  • educate clinicians and patients on the biases inherent in AI
  • encourage open discussions on the implications of AI in healthcare decision-making.

Promote data cleaning, organising, anonymisation and sharing practices.

  • This could include:67
    • full digitalisation of patient medical information
    • establishment of data trusts
    • engaging with federated learning, where instead of moving all training data to a central location, models are trained locally on local datasets to ensure data security (see the sketch after this list)68
  • This allows:10, 54
    • sufficient and more widely representative data to be available for designing stronger and less biased models
    • contributions and learning across institutional boundaries.
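To make the federated learning item above concrete, here is a minimal sketch of federated averaging, assuming a toy logistic-regression model and three hypothetical hospital datasets; none of the names or data correspond to any system cited in this brief.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_local(weights, X, y, lr=0.1, epochs=20):
    """A few epochs of logistic-regression gradient descent on one site's data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)  # gradient step
    return w

# Three hospitals' private datasets; the raw records never leave each site.
sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):
    # Each site trains locally, starting from the shared global model...
    local_ws = [train_local(global_w, X, y) for X, y in sites]
    # ...and only the weight vectors are sent back and averaged (by site size).
    sizes = [len(y) for _, y in sites]
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("federated global weights:", np.round(global_w, 3))
```

Only model parameters cross institutional boundaries in this scheme, which is what enables contributions and learning across sites without centralising patient records.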

Governance, legislation and stewardship specifically to support equity and safety.21

  • Develop strong policies around patient informed consent for data sharing and use.14, 19, 29, 32, 69
  • Develop minimum requirements for:14, 19
    • deidentification of patient data
    • disclosure of methodologies
    • disclosure of data sources
    • model accuracy and performance metrics.
  • Establish:
    • incentives for AI developers to take measures to minimise bias69
    • ethical frameworks to identify and minimise the impact of biased models, as well as to guide design choices32
    • oversight committees and processes to ensure minimum requirements are met
    • data governance panels made up of a representative group of patients, clinical experts, and experts in AI, ethics and law, to monitor and review the datasets and algorithms used for training AI, ensuring that the data is representative and that the algorithms used are impartial.32
    • effective national safety monitoring systems so that cases of patient risk and harm related to AI use are rapidly detected and communicated to all relevant parties.19

Liability and accountability

  • Develop clear guidelines for responsibility and accountability in healthcare AI.14, 29, 69
    • This includes determining the roles and responsibilities of various stakeholders, such as physicians, AI developers, and healthcare institutions, in cases where misdiagnoses or other patient harm occur.
    • Some authors suggest clinicians should be ultimately responsible for verifying AI-generated diagnoses and integrating them into the clinical decision-making process.14 However, others have suggested that this places an unrealistic burden on clinicians and instead advocate that AI systems should be designed to support existing ways of working.70
  • Update legislation for handling AI-related medical disputes.32
    • Some authors suggest transferring existing common law principles of negligence and malpractice to AI agents.32

Transparency in methods

  • Promotion of complete, accurate, and transparent reporting of studies that develop prediction models or evaluate their performance.71
  • Develop the capability to provide meaningful and personalised explanations about the results generated by algorithms.32
  • Demonstrate the reliability of the AI models.32
  • Openly report on what or whose data might be missing from an AI model’s development to date.29
  • Choose and/or require interpretable and explainable AI models over and above ‘black-box’ models (see the sketch after this list).10, 21-23, 66, 68, 69
    • Explainable AI involves understanding how a specific algorithm works and knowing who is responsible for its implementation.32, 66
    • It involves algorithm source codes, data sets and training conditions being transparently reported.10, 21, 72, 73
    • However, some have cautioned that explainable AI models have their own challenges:68
      • AI models are necessarily complex and non-linear due to the large amounts of data and variables they handle, which makes them difficult to ‘explain’ or for humans to understand.74
      • Explainable AI can be (necessarily) simpler in design and make more approximations, which in turn can produce models that are less fair for minority populations75 or that explain themselves incorrectly in an attempt to oversimplify complex models.76
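As a concrete illustration of one simple, model-agnostic explanation technique, the sketch below computes permutation importance: shuffle one feature at a time and measure how much accuracy drops. The data and the ‘fitted’ model weights are hypothetical toys, not any model referenced above.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)    # feature 0 drives the label

w = np.array([2.0, 0.4, 0.0])                    # toy "fitted" model weights
predict = lambda data: ((data @ w) > 0).astype(int)

baseline = (predict(X) == y).mean()
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])         # break feature j's relationship
    drop = baseline - (predict(Xp) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Reports like this only describe which inputs a model relies on; as the cautions above note, they do not make a complex model simple.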

Best practices in data sources and methods

  • Ensure the use of datasets which are diverse and representative of heterogeneous populations and disease presentations, during AI development and training.14, 61
  • Train models on data that are representative of the population they serve, encapsulating characteristics such as age, ethnicity, gender, sexual orientation, and socioeconomic background.57
  • Integrate patient data from various sources to reduce model bias and improve the explainability of clinical decisions.32, 63
  • Incorporate the use of different algorithms to account for disparities in dataset sample sizes.63
  • Consider datasets which oversample minority populations. This can ensure their data is accounted for when models are designed.63 However, this requires judicious choices around how minorities are selected, conceptualised and labelled.77
  • Complement real datasets with synthetic data; this can improve the accuracy of clinical diagnosis within underrepresented groups.78
  • Require AI models which have been developed via cross-validation and experimental designs. These will demonstrate higher external validity and reproducibility (e.g. high accuracy when the AI model encounters a novel dataset; see the sketch after this list).10, 31, 79
  • Ensure that external validation datasets are:15, 61
    • representative of the population and setting in which the AI system is intended to be deployed
    • independent of the dataset used for developing the AI model during training and testing.
  • Consider modification of AI algorithms by learning from local data, to ensure they are a good fit with local contexts.32, 73
  • Consider local and contextual variables when building predictive models in order to minimise the impact of algorithmic bias on clinical decisions.32
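A minimal sketch of the validation practices above, assuming scikit-learn and entirely synthetic data: internal cross-validation on a development dataset, then a check against an independent ‘external’ dataset with a shifted case mix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Development data from one (hypothetical) site.
X_dev = rng.normal(size=(400, 4))
y_dev = (X_dev[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

model = LogisticRegression()
cv_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc").mean()
print(f"internal 5-fold AUC: {cv_auc:.3f}")

# External validation: an independent site with a shifted distribution.
X_ext = rng.normal(loc=0.5, size=(200, 4))
y_ext = (X_ext[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
model.fit(X_dev, y_dev)
ext_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"external AUC: {ext_auc:.3f}")  # a large gap flags poor generalisability
```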

AI model audits and ongoing monitoring

  • Implement regular, multi-disciplinary audit and AI validation of dependability, performance, safety and ethical compliance. This can identify potential biases and ensure that AI systems remain fair, accurate, and effective in diverse healthcare settings.14, 32, 69, 80
  • Examine disparities in model performance metrics between less and more socially advantaged populations, then develop solutions to address those disparities before models are implemented in clinical practice (see the sketch after this list).18
  • Monitor a model’s validity over time via feedback systems. These can suggest when model re-training might be required.10, 24
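As a sketch of the subgroup audit described above, assuming hypothetical predictions and group labels (the 25% versus 10% error rates are invented to make the disparity visible):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
y_true = rng.integers(0, 2, n)
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])   # B is under-represented
error_rate = np.where(group == "B", 0.25, 0.10)        # simulate a biased model
y_pred = np.where(rng.random(n) > error_rate, y_true, 1 - y_true)

for g in ("A", "B"):
    mask = group == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: n={mask.sum():4d}, accuracy={acc:.3f}")
# Escalate for review if the between-group gap exceeds a pre-agreed threshold.
```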

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support the development of AI models that minimise bias.

Data sharing repositories and guidelines

  • The MAIDA initiative has established a framework for global medical-imaging data sharing, to address the shortage of public health data and enable rigorous evaluation of AI models across all populations. They discuss the challenges and lessons learnt in establishing the initiative.81
  • The NSW Health Data Lake can be used in the future to integrate data, AI and machine learning methods into the Data Engineering processes currently used for complex analysis of data. Increased automation will drastically reduce the time involved in cleaning data.82
  • The STANDING Together (standards for data diversity, inclusivity and generalisability) initiative is an international, consensus-based initiative that aims to develop recommendations for the composition (who is represented) and reporting (how they are represented) of datasets that underpin medical AI systems.83
  • Researchers in Canada have reported on a model which allows multiple hospitals and jurisdictions to share data for AI model development without compromising on privacy and data security. Using the platform allowed the project to develop more robust AI models than would otherwise have been possible.84
  • The Japan Medical Image Database (J-MID), established in 2018, contains CT and magnetic resonance imaging (MRI) scans and diagnostic reports uploaded from major university hospitals in Japan. Since moving to cloud-based infrastructure in 2023, J-MID now contains approximately 500 million images. Japan’s national health insurance system provides CT and MRI scans for all citizens, which allows for the collection of unbiased image data regardless of age or socioeconomic status.85
Collaborative AI testing labs

  • Public-private partnerships can support AI assurance labs. These can serve as a shared resource for the industry to validate AI models and accelerate development and successful market adoption.86
  • The University of Melbourne’s AI Assurance Lab, for example, validates AI technologies with respect to quality, safety, privacy, and reliability.87
Developing and sharing foundational AI models

  • Foundational AI models are trained on broad data and can be applied across a wide range of use cases.88
  • Foundational models such as MONET enable AI transparency across the entire system development pipeline.89
AI model audit and monitoring

  • ‘Post-deployment monitoring’ is one of the 10 guiding principles identified by the joint bodies of U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency, in the development of Good Machine Learning Practice (GMLP).90
  • Singapore’s Ministry of Health AI guidelines refer to this process as ‘ground-truthing’.17

Data security and intellectual property

Issues

  • The collection of personal health data risks exposing patients to:32
    • privacy invasion
    • repurposing of data for uses for which consent was not given91
    • fraud
    • algorithmic bias
    • information leakage
    • identity theft.
  • The way that AI models learn exposes them to novel risks: they can be attacked and controlled by a bad actor with malicious intent, an ‘AI attack’.
  • A machine learning model can be attacked in three different ways. It can be:92
    • misled into making a wrong prediction
    • altered through its data, e.g. to make it biased, inaccurate or even malicious
    • replicated or stolen, e.g. IP theft through continuous querying of the model (see the sketch below).
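The third attack type, model extraction, can be sketched in a few lines; the ‘victim’ model, data and scikit-learn setup here are hypothetical toys. An attacker who can query a deployed model repeatedly can train a surrogate that closely replicates it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, -1.0, 0.5, 0.0]) > 0).astype(int)
victim = LogisticRegression().fit(X, y)      # deployed model, internals hidden

queries = rng.normal(size=(2000, 4))         # the attacker only needs the API...
answers = victim.predict(queries)            # ...and its predicted labels
surrogate = LogisticRegression().fit(queries, answers)

X_test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of unseen inputs")
```

This is one reason the monitoring of high-frequency, repetitive queries recommended below matters.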

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to promote safe and secure use of AI models in healthcare and ensure data protection.

  • Prioritise privacy and data protection during all design stages and deployment of AI systems.15, 93
  • Protect the privacy of individuals whose data is used to train AI systems.15, 19
  • Ensure AI systems meet an organisation’s data residency or sovereignty obligations, with respect to where (globally or in the cloud) data is stored.94
  • Ensure systems can log and monitor AI model inputs and outputs, including high-frequency, repetitive prompts (see the sketch after this list).94
  • Enact broader rulemaking authority for patient data protection so that regulators can act quickly as new privacy and security threats emerge.94
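A minimal sketch of the logging-and-monitoring item above: a sliding-window check that flags high-frequency, repetitive prompts. The window length and threshold are hypothetical; a production system would use proper rate limiting and audit logging.

```python
from collections import Counter, deque

WINDOW_SECONDS, MAX_REPEATS = 60.0, 20
recent = deque()   # (timestamp, prompt) pairs within the window

def log_and_check(prompt, now):
    """Record a prompt and return True if it looks like abusive repetition."""
    recent.append((now, prompt))
    while recent and recent[0][0] < now - WINDOW_SECONDS:
        recent.popleft()                     # drop entries outside the window
    return Counter(p for _, p in recent)[prompt] > MAX_REPEATS

flagged = False
for i in range(25):
    flagged = log_and_check("same suspicious prompt", now=1000.0 + i)
print("flagged:", flagged)   # True once repeats exceed the threshold
```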
Technological solutions

  • Modern technical solutions need to be employed by developers.32
  • There are four pillars of protection: training data privacy, input privacy, output privacy, and model privacy.95
  • Privacy protection mechanisms exist at different points along the pipeline (illustrated in the sketch after this list) and can include:95
    • cryptographic techniques (e.g. homomorphic encryption, garbled circuits)
    • non-cryptographic techniques (e.g. differential privacy)
    • hybrid techniques (e.g. federated learning)
    • decentralised systems (e.g. blockchain).
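As a sketch of one non-cryptographic technique from this list, differential privacy: noise calibrated to a query's sensitivity bounds how much any single patient's record can change the released answer. The dataset and epsilon value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
ages = rng.integers(20, 90, size=1000)   # hypothetical patient ages

def dp_count(values, predicate, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity / epsilon."""
    true_count = float(predicate(values).sum())
    return true_count + rng.laplace(scale=sensitivity / epsilon)

noisy = dp_count(ages, lambda a: a > 65, epsilon=0.5)
print(f"noisy count of patients over 65: {noisy:.1f}")
# Smaller epsilon means more noise and a stronger privacy guarantee.
```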

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support the development of secure AI.

  • The Australian Cyber Security Centre has published guidance together with other peak bodies to highlight best practices in AI adoption and security.94
  • The Office of the Victorian Information Commissioner has published Artificial Intelligence – Understanding Privacy Obligations.96
  • In the US, the National Institute of Standards and Technology has published an Artificial Intelligence Risk Management Framework.97
  • The Office of the Privacy Commissioner of Canada has published Principles for responsible, trustworthy and privacy-protective generative AI technologies.98

Poor uptake – patients

Issues

Patients can mistrust AI due to:

  • data privacy concerns91, 99
    • many patients are unwilling to share their health data, even for developing algorithms that might improve quality of care32
  • concerns around accuracy or adequate performance99, 100
  • lack of model and performance explanations.99

AI literacy

  • Large language models can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.93

Patients may be less likely to adopt AI systems or apps in cases of:99

  • poor usability or user interface
  • wanting to maintain human patient-clinician relationships
  • lack of perceived empathy in AI and apps
  • inappropriate or over-detailed information, or a lack of actionable recommendations.

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to ensure that patients and their needs are included in the AI development process and patient uptake of new AI technologies is more likely.

Stakeholder (patient and advocacy groups) buy-in and co-design101

  • Promote and fund AI clinical trials which prioritise patient-relevant outcomes to fully understand AI's true effects and limitations in health care.102
  • Gain a thorough understanding of the unique challenges and concerns faced by various patient populations.14
  • Promote the development of more equitable and effective AI solutions tailored to patient needs.14, 103

AI system and app development and design

  • Embed ethical principles into AI design to promote a trusting relationship between patients and AI.32
  • Adopt a patient-centred approach in designing medical AI to promote informed choices aligned with patient values and respect patient autonomy.32
  • Seek to maximise explainability.99

Maximising user experience

  • Involve stakeholders in interface design.14, 104
  • Design AI applications which have options to personalise information such as the explanation of diagnoses, recommendations or patient education.100
  • Design applications that interconnect with other information sources such as the electronic health record, calendars, and smart devices.100

Humanness and AI

  • Design AI systems and chatbots to display humanness (e.g. recognition, personification and empathy).30, 32, 99 This can:
    • provide meaningful and ethical care
    • be more likely to create connection, trust and appeal.
  • Create (real or perceived) anonymity to help patients feel comfortable discussing sensitive topics.100

Patient rights and psychological safety

  • Be transparent that AI is in use17 and about how data will be anonymised, stored and secured.100
  • Design informed consent processes to be both thorough and clear.32, 105
  • Consider data ownership and ethical considerations carefully before sharing data, particularly if a profit corporation is involved.105
  • Consider data sovereignty and allow for flexibility in how patients (particularly subgroups of marginalised patients) have ownership over their own data where there are privacy and ethical concerns.106
  • Develop guidelines and resources for patients and carers to maximise AI literacy, so that stakeholders can interpret AI data and have increased trust in AI.19, 107
  • Encourage people to become more proactive in sharing and disseminating data about themselves via (secure) personal health repositories.32

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of AI models or apps that have been co-designed, as well as initiatives to support others to co-design healthcare AI.

  • A number of case examples exist where a co-design process with patients, clinicians and AI developers has been used to produce or plan:
    • a patient-driven project to use AI to analyse the effect of lithium medication on kidney function108
    • a chatbot for mental health support in young people109
    • a chatbot for patients living with COPD110
    • an AI-enhanced social robot to be used as a distraction tool for children in the emergency department111
    • co-creation of a digital learning health system called the BrainHealth Databank.112
  • Researchers from the UK have shared open access materials on Patient and public involvement to build trust in artificial intelligence: a framework, tools and case studies.108, 113

Poor uptake – clinical staff

Issues

  • Poor uptake of novel systems can occur with staff for a number of reasons.
  • Useful AI solutions can still fail at the implementation stage without proper planning for system and end-user needs.114
  • Acceptability of AI for healthcare professionals is underpinned by:33, 100, 115-119
    • user factors: trust in AI (and its accuracy and implications), system understanding, AI literacy
    • system usage factors: added value, time savings, burden, interface and user friendliness, workflow integration and interoperability of systems
    • socio-organisational-cultural factors: social influence, organisational readiness, ethical aspects, perceived threat to professional identity.

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to promote trust in AI in healthcare staff and ensure their needs are incorporated into product designs.

Promoting the development of forms of AI which support clinicians’ existing workflows and decisions, rather than autonomous AI systems which then require clinician sign-off.70

Culture and systems which:120, 121

  • evaluate new technologies in clinical contexts122
  • support clinicians across their entire workflow
  • promote learning through experience
  • allow for the accommodation of clinicians’ autonomy
  • encourage the idea that change is vital and positive, and that progress may involve some experimentation and failure along the way.

Interoperability of systems and modernisation of organisational data infrastructure and procedures, such as:21, 123

  • Develop integrated care systems via co-design, taking account of staff needs and relational considerations and incorporating change management.124
  • Establish remote access methods.
  • Invest in distributed data platforms and cloud computing infrastructure.
  • Improve data storage and increase computational power for advanced analytics.
  • Make data available seamlessly across integrated systems to enable faster, easier and more accurate patient data analysis.116

Sustainable and context-relevant AI development

  • Develop strong local leadership who are responsible for adapting AI to the local contexts.32
  • Ensure inclusive local leadership that includes the perspectives of all stakeholders.19, 32
  • Ensure health systems and vendors meet the needs of the end-users of clinical decision support tools.18, 19
  • Coordinate multidisciplinary teams and align projects with key institutional values, for more seamless clinical implementation.54
  • Incorporate the perspectives and feedback of clinicians in order to draw on their knowledge of workflows. A study of nurses using AI showed that nurses can provide real-world insight and solutions for implementation issues.125

Transparency:

  • around the role of AI in supporting rather than replacing clinical care30, 126
  • around a model’s accuracy and how a recommendation is derived (who developed the system, the system reasoning and reliability)100
  • so that users of a system understand how decisions are made and are therefore more likely to adopt it.16
  • Transparency alone might not be enough to create trust, however:
    • staff are also likely to want high accuracy and reliability in AI models in order to implement them with patients18
    • AI design should therefore focus on the technical performance of the technology, and on the infrastructure and processes that ensure technical performance and safety, rather than attempting to explicitly build in trust features.18

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of AI models or apps that have been co-designed, as well as initiatives to support others to co-design healthcare AI.

  • A number of case examples exist where a co-design process with clinicians and AI developers has been used to produce or plan:
    • an AI model to classify skin lesions127
    • co-creation of a digital learning health system called the BrainHealth Databank112
    • an AI-enhanced social robot to be used as a distraction tool for children in the emergency department.111

Workforce changes and challenges

Issues

  • AI outputs require clinicians to interpret them, which requires appropriate levels of AI literacy.93, 128
  • The impact that AI will have on the health workforce is not well understood, particularly in terms of knowledge and skills gaps, and curriculum requirements.107
  • Some staff have fears and mistrust around medicolegal implications and job losses.129, 130
  • The increased use of AI in the workplace may result in increased collection and analysis of data on workers. Data may or may not be personal, and could include information such as worker movements and digital activities or even biometric data. Workers may have concerns around data security and use of such data.131

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to ensure workforce changes are evidence based and well managed.

Accurate projections of workforce requirements

  • Develop a thorough understanding of knowledge and skills gaps and current capability building efforts.19
  • Investigate the impact of AI development and implementation on different staff employed by health services, including:
    • data scientists
    • end-user clinicians
    • end-user administrative staff
    • positions which might increase in demand due to AI’s increased capability e.g., geneticists.

Workforce training is an important step in facilitating AI adoption.21, 107, 132 It can:

  • increase digital and AI literacy in the workforce
  • make workflows more efficient
  • ensure clinicians can identify where AI models have deviated from intended use and may behave in a way that increases the risk of liability22
  • reduce likelihood of incorrect applications of AI models.

AI and digital literacy

  • Prioritise training and retention of local expertise.32
  • Develop high levels of digital and genomic literacy in clinical staff133
    • think about ways to nurture digital literacy in a way that leaves no one behind122
    • conduct in-person live training and ensure there is access to ongoing support.122
  • Prepare future clinical staff to not only use AI in care delivery and research but also critically evaluate its applicability and limitations.18, 134
  • Ensure that workforce training in AI and digital literacy covers a range of increasing competencies including:19, 122, 135-137
    • core knowledge around computer science and information technology
    • skills in the application of AI technology, pedagogy, ethics, healthcare policy, and clinical practice
    • specialist skills and capabilities where relevant for clinical implementation, to deploy and maintain technologies, including the potential for personalised educational elements.

Culture and stakeholder buy-in54, 130, 132, 133, 138, 139

  • View the evolution of healthcare roles as something that requires active planning and shaping, rather than a passive act.
  • Create a shared vision for how professions and occupations can develop with greater use of technology.
  • Develop new team structures that tightly integrate data scientists and engineers with frontline clinical staff to foster cross-disciplinary communication and ensure that AI tools are fit for implementation in healthcare environments.30
  • Adopt a change management approach which incorporates structured AI adoption programs and is grounded in implementation science. This is particularly relevant in cases where it will replace administrative roles.

Transparency

  • whenever AI is in use in the workplace, wherever feasible.131

Worker data131

  • Restrict the collection, use, inference, and disclosure of staff personal information.
  • Require safeguards for staff personal information and appropriate handling of data.

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support or inform upcoming changes in workforce requirements due to the increasing role of AI in healthcare.

  • NSW Health has published a report on the impacts of technology on the health workforce.140
  • In the UK, the Topol Review: Preparing the healthcare workforce to deliver the digital future outlines recommendations for the NHS to integrate digital innovations, including AI, into workforce planning. It highlights that healthcare staff will need high levels of digital and genomics literacy.133
  • An EIT Health and McKinsey & Company report estimates what proportion of staff hours could be automated by AI and provides recommendations on investing in new talent, creating new roles and change management.132
  • A report from the American Hospital Association’s Center for Health Innovation provides useful frameworks and tools for hospital and health system leaders to successfully integrate AI technologies into their workforce and workflows. It outlines new potential roles, desirable digital skills and discusses overcoming workforce challenges.141
  • The report Building Canada’s Future AI Workforce outlines the support needed for Canada’s digital workforce to acquire AI skills through various training pathways: broad upskilling initiatives to target widely needed digital skills and strategic cross-training programs to address acute needs like those in the field of AI, across health and other sectors.142

References

  1. Möllmann NRJ, Mirbabaie M, Stieglitz S. Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Informatics Journal. 2021;27(4):14604582211052391. DOI: 10.1177/14604582211052391
  2. Gillon R. Medical ethics: four principles plus attention to scope. BMJ. 1994;309(6948):184. DOI: 10.1136/bmj.309.6948.184
  3. Gerke S, Babic B, Evgeniou T, et al. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. npj Digital Medicine. 2020;3(1):53. DOI: 10.1038/s41746-020-0262-2
  4. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics. 2021;22(1):122. DOI: 10.1186/s12910-021-00687-3
  5. Wubineh BZ, Deriba FG, Woldeyohannis MM. Exploring the opportunities and challenges of implementing artificial intelligence in healthcare: A systematic literature review. Urologic Oncology: Seminars and Original Investigations. 2024;42(3):48-56. DOI: 10.1016/j.urolonc.2023.11.019
  6. Medicines and Healthcare products Regulatory Agency (MHRA). Software and Artificial Intelligence (AI) as a Medical Device. London: MHRA; 2023 [cited 22 Feb 2024]. Available from: https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device
  7. US Food and Drug Administration (FDA). Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan. Washington, DC: FDA; 2021 [cited 22 Feb 2024]. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
  8. Fraser AG, Biasin E, Bijnens B, et al. Artificial intelligence in medical device software and high-risk medical devices – a review of definitions, expert recommendations and regulatory initiatives. Expert Review of Medical Devices. 2023;20(6):467-91. DOI: 10.1080/17434440.2023.2184685
  9. Organisation for Economic Co-operation and Development (OECD). Collective action for responsible AI in health. Paris: OECD; 2024 [cited 14 Feb 2024]. Available from: https://www.oecd.org/publications/collective-action-for-responsible-ai-in-health-f2050177-en.htm
  10. Cobanaj M, Corti C, Dee EC, et al. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer. 2024 Feb;198:113504. DOI: 10.1016/j.ejca.2023.113504
  11. Sutherland E. Artificial intelligence in health: big opportunities, big risks. Paris: OECD; 2023 [cited 12 Feb 2024]. Available from: https://oecd.ai/en/wonk/artificial-intelligence-in-health-big-opportunities-big-risks
  12. Gilbert S, Harvey H, Melvin T, et al. Large language model AI chatbots require approval as medical devices. Nature Medicine. 2023 2023/10/01;29(10):2396-8. DOI: 10.1038/s41591-023-02412-6
  13. Komesaroff PA, Felman ER. How to make sense of the ethical issues raised by artificial intelligence in medicine. Internal Medicine Journal. 2023 2023/08/01;53(8):1304-5. DOI: 10.1111/imj.16180
  14. Ueda D, Kakinuma T, Fujita S, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Japanese Journal of Radiology. 2024;42(1):3-15. DOI: 10.1007/s11604-023-01474-3
  15. World Health Organization (WHO). Regulatory considerations on artificial intelligence for health. Geneva: WHO; 2023.
  16. Cresswell K, Rigby M, Magrabi F, et al. The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision. Health Policy. 2023;136:104889. DOI: 10.1016/j.healthpol.2023.104889
  17. Ministry of Health Singapore. Artificial intelligence in healthcare guidelines. Singapore Government; 2023 [cited 14 Feb 2024]. Available from: https://www.moh.gov.sg/licensing-and-regulation/artificial-intelligence-in-healthcare
  18. Rojas JC, Teran M, Umscheid CA. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated. Crit Care Clin. 2023 Oct;39(4):769-82. DOI: 10.1016/j.ccc.2023.02.004
  19. Australian Alliance for Artificial Intelligence in Healthcare (AAAiH). A Roadmap for AI in Healthcare for Australia. Sydney: AAAiH; 2021 [cited 10 Jan 2024]. Available from: https://aihealthalliance.org/2021/12/01/a-roadmap-for-ai-in-healthcare-for-australia/
  20. Stogiannos N, Malik R, Kumar A, et al. Black box no more: a scoping review of AI governance frameworks to guide procurement and adoption of AI in medical imaging and radiotherapy in the UK. The British Journal of Radiology. 2023;96(1152):20221157. DOI: 10.1259/bjr.20221157
  21. Fisher S, Rosella LC. Priorities for successful use of artificial intelligence by public health organizations: a literature review. BMC Public Health. 2022;22(1):2146. DOI: 10.1186/s12889-022-14422-z
  22. Hedderich DM, Weisstanner C, Van Cauter S, et al. Artificial intelligence tools in clinical neuroradiology: essential medico-legal aspects. Neuroradiology. 2023;65(7):1091-9. DOI: 10.1007/s00234-023-03152-7
  23. Dorr DA, Adams L, Embí P. Harnessing the Promise of Artificial Intelligence Responsibly. JAMA. 2023;329(16):1347-8. DOI: 10.1001/jama.2023.2771
  24. Widner K, Virmani S, Krause J, et al. Lessons learned from translating AI from development to deployment in healthcare. Nature Medicine. 2023;29(6):1304-6. DOI: 10.1038/s41591-023-02293-9
  25. Wang Y, Li N, Chen L, et al. Guidelines, Consensus Statements, and Standards for the Use of Artificial Intelligence in Medicine: Systematic Review. Journal of Medical Internet Research. 2023;25(1). DOI: 10.2196/46089
  26. Lammons W, Silkens M, Hunter J, et al. Centering Public Perceptions on Translating AI Into Clinical Practice: Patient and Public Involvement and Engagement Consultation Focus Group Study. J Med Internet Res. 2023 Sep 26;25:e49303. DOI: 10.2196/49303
  27. Chan A. The EU AI Act: Adoption Through a Risk Management Framework. Schaumburg, IL: ISACA; 2023 [cited 6 Mar 2024]. Available from: https://www.isaca.org/resources/news-and-trends/industry-news/2023/the-eu-ai-act-adoption-through-a-risk-management-framework
  28. Baquero J, Burkhardt R, Govindarajan A, et al. Derisking AI by design: How to build risk management into AI development. New York: McKinsey & Company; 2000 [cited 6 Mar 2024]. Available from: https://www.mckinsey.com/capabilities/quantumblack/our-insights/derisking-ai-by-design-how-to-build-risk-management-into-ai-development
  29. Goldberg CB, Adams L, Blumenthal D, et al. To do no harm — and the most good — with AI in health care. Nature Medicine. 2024;30(3):623-7. DOI: 10.1038/s41591-024-02853-7
  30. Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artificial Intelligence in Medicine. 2024;151:102861. DOI: 10.1016/j.artmed.2024.102861
  31. How to support the transition to AI-powered healthcare. Nature Medicine. 2024;30(3):609-10. DOI: 10.1038/s41591-024-02897-9
  32. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Social Science & Medicine. 2022;296:114782. DOI: 10.1016/j.socscimed.2022.114782
  33. Giddings R, Joseph A, Callender T, et al. Factors influencing clinician and patient interaction with machine learning-based risk prediction models: a systematic review. The Lancet Digital Health. 2024;6(2):e131-e44. DOI: 10.1016/S2589-7500(23)00241-8
  34. OECD Policy Observatory. Updates to the OECD’s definition of an AI system explained. Paris: OECD; 2023 [cited 14 Feb 2024]. Available from: https://oecd.ai/en/wonk/ai-system-definition-update
  35. James Martin Institute for Public Policy (JMI). Leadership for Responsible AI: A Constructive Agenda for NSW. Sydney: JMI; 2023 [cited 06 Mar 2024]. Available from: https://jmi.org.au/wp-content/uploads/2023/12/FINAL-REVIEWED-Leadership-for-Responsible-AI-v3.pdf
  36. Organisation for Economic Co-operation and Development (OECD). OECD AI Principles. Paris: OECD; 2019 [cited 19 Feb 2024]. Available from: https://oecd.ai/en/ai-principles
  37. Organisation for Economic Co-operation and Development (OECD). The state of implementation of the OECD AI Principles four years on. 2023. DOI: 10.1787/835641c9-en
  38. Castonguay A, Wagner G, Motulsky A, et al. AI maturity in health care: An overview of 10 OECD countries. Health Policy. 2024;140:104938. DOI: 10.1016/j.healthpol.2023.104938
  39. World Health Organization (WHO). Ethics and governance of artificial intelligence for health. Geneva: WHO; 2021 [cited 17 Jul 2023]. Available from: https://www.who.int/publications/i/item/9789240029200
  40. World Health Organization (WHO). Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Geneva: WHO; 2024 [cited 13 Mar 2024]. Available from: https://www.who.int/publications/i/item/9789240084759
  41. Department of Industry Science and Resources. Australia’s Artificial Intelligence Action Plan. Canberra: Australian Government; 2021 [cited 21 Feb 2024]. Available from: https://www.industry.gov.au/publications/australias-artificial-intelligence-action-plan
  42. van der Vegt A, Campbell V, Zuccon G. Why clinical artificial intelligence is (almost) non-existent in Australian hospitals and how to fix it. Med J Aust. 2023 Dec 26. DOI: 10.5694/mja2.52195
  43. Digital NSW. Artificial intelligence assurance framework. Sydney: NSW Government; 2022 [cited 23 Oct 2023]. Available from: https://www.digital.nsw.gov.au/policy/artificial-intelligence/nsw-artificial-intelligence-assurance-framework
  44. Hutson M. Rules to keep AI in check: nations carve different paths for tech regulation. London: Nature; 2023 [cited 19 Feb 2024]. Available from: https://www.nature.com/articles/d41586-023-02491-y
  45. AI Centre for Value Based Healthcare. The AI centre for value based healthcare is pioneering AI technology for the NHS. London: AI centre for value based healthcare; 2023 [cited 17 Jul 2023]. Available from: https://www.aicentre.co.uk/
  46. The NHS AI Lab. The NHS AI Lab. London: NHS; 2023 [cited 17 Jul 2023]. Available from: https://transform.england.nhs.uk/ai-lab/
  47. National Institute for Health and Care Excellence (NICE). One-stop-shop for AI and digital regulations for health and social care launched. London: NICE; 2023 [cited 19 Feb 2024]. Available from: https://www.nice.org.uk/News/Article/one-stop-shop-for-ai-and-digital-regulations-for-health-and-social-care-launched
  48. European union agency for cybersecurity (ENISA). Is Secure and Trusted AI Possible? The EU Leads the Way. Athens: ENISA; 2023 [cited 17 Jul 2023]. Available from: https://www.enisa.europa.eu/news/is-secure-and-trusted-ai-possible-the-eu-leads-the-way
  49. Meszaros J, Minari J, Huys I. The future regulation of artificial intelligence systems in healthcare services and medical research in the European Union. Frontiers in Genetics. 2022;13. DOI: 10.3389/fgene.2022.927721
  50. Andreotta AJ, Kirkham N, Rizzi M. AI, big data, and the future of consent. AI & SOCIETY. 2022;37(4):1715-28. DOI: 10.1007/s00146-021-01262-5
  51. US Food and Drug Administration (FDA). FDA Establishes New Advisory Committee on Digital Health Technologies. Silver Spring, Maryland: FDA; 2023 [cited 19 Feb 2024]. Available from: https://www.fda.gov/news-events/press-announcements/fda-establishes-new-advisory-committee-digital-health-technologies
  52. Canadian Government. The Artificial Intelligence and Data Act (AIDA) – Companion document.: Canadian Government; 2023 [cited 23 July 2023]. Available from: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
  53. The Canadian Institute for Advanced Research (CIFAR). Building a Learning Health System for Canadians: Report of the Artificial Intelligence for Health Task Force. Toronto, Canada: CIFAR; 2020 [cited 6 Mar 2024]. Available from: https://cifar.ca/cifarnews/2020/07/01/building-a-learning-health-system-for-canadians/
  54. Chae A, Yao MS, Sagreiya H, et al. Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology. Radiology. 2024;310(1):e223170. DOI: 10.1148/radiol.223170
  55. Gray M, Samala R, Liu Q, et al. Measurement and Mitigation of Bias in Artificial Intelligence: A Narrative Literature Review for Regulatory Science. Clinical Pharmacology & Therapeutics. 2024;115(4):687-97. DOI: 10.1002/cpt.3117
  56. Physician–machine partnerships boost diagnostic accuracy, but bias persists. Nature Medicine. 2024. DOI: 10.1038/s41591-023-02733-6
  57. Chan SCC, Neves AL, Majeed A, et al. Bridging the equity gap towards inclusive artificial intelligence in healthcare diagnostics. BMJ. 2024;384:q490. DOI: 10.1136/bmj.q490
  58. Rajpurkar P, Chen E, Banerjee O, et al. AI in health and medicine. Nature Medicine. 2022;28(1):31-8. DOI: 10.1038/s41591-021-01614-0
  59. Seyyed-Kalantari L, Zhang H, McDermott MBA, et al. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature Medicine. 2021;27(12):2176-82. DOI: 10.1038/s41591-021-01595-0
  60. Grant C. Algorithms Are Making Decisions About Health Care, Which May Only Worsen Medical Racism. USA: American Civil Liberties Union; 2022 [cited 13 Jul 2023]. Available from: https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism
  61. Liu X, Reigle J, Prasath VBS, et al. Artificial intelligence image-based prediction models in IBD exhibit high risk of bias: A systematic review. Computers in Biology and Medicine. 2024;171:108093. DOI: 10.1016/j.compbiomed.2024.108093
  62. Yu F, Moehring A, Banerjee O, et al. Heterogeneity and predictors of the effects of AI assistance on radiologists. Nature Medicine. 2024;30(3):837-49. DOI: 10.1038/s41591-024-02850-w
  63. Suran M, Hswen Y. How to Navigate the Pitfalls of AI Hype in Health Care. JAMA. 2024;331(4):273-6. DOI: 10.1001/jama.2023.23330
  64. Duffourc M, Gerke S. Generative AI in Health Care and Liability Risks for Physicians and Safety Concerns for Patients. JAMA. 2023;330(4):313-4. DOI: 10.1001/jama.2023.9630
  65. Wadden JJ. Defining the undefinable: the black box problem in healthcare artificial intelligence. Journal of Medical Ethics. 2022;48(10):764-8. DOI: 10.1136/medethics-2021-107529
  66. Marcus E, Teuwen J. Artificial intelligence and explanation: How, why, and when to explain black boxes. European Journal of Radiology. 2024;173. DOI: 10.1016/j.ejrad.2024.111393
  67. O’Dowd A. Sell access to NHS data to boost health innovation, say Blair and Hague. BMJ. 2024;384:q225. DOI: 10.1136/bmj.q225
  68. Unger M, Kather JN. Deep learning in cancer genomics and histopathology. Genome Medicine. 2024;16(1):44. DOI: 10.1186/s13073-024-01315-6
  69. Zhou K, Gattinger G. The Evolving Regulatory Paradigm of AI in MedTech: A Review of Perspectives and Where We Are Today. Therapeutic Innovation & Regulatory Science. 2024;58(3):456-64. DOI: 10.1007/s43441-024-00628-3
  70. Adler-Milstein J, Redelmeier DA, Wachter RM. The Limits of Clinician Vigilance as an AI Safety Bulwark. JAMA. 2024. DOI: 10.1001/jama.2024.3620
  71. Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024;385:e078378. DOI: 10.1136/bmj-2023-078378
  72. Reddy S. Navigating the AI Revolution: The Case for Precise Regulation in Health Care. J Med Internet Res. 2023;25:e49989. DOI: 10.2196/49989
  73. Aristidou A, Jena R, Topol EJ. Bridging the chasm between AI and clinical implementation. The Lancet. 2022;399(10325):620. DOI: 10.1016/S0140-6736(22)00235-5
  74. Han H, Liu X. The challenges of explainable AI in biomedical data science. BMC Bioinformatics. 2022;22(12):443. DOI: 10.1186/s12859-021-04368-1
  75. Anderer S, Hswen Y. AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says. JAMA. 2024. DOI: 10.1001/jama.2023.22981
  76. Kahn J. What’s wrong with “explainable A.I.”. New York: Fortune; 2022 [cited 6 Mar 2024]. Available from: https://fortune.com/2022/03/22/ai-explainable-radiology-medicine-crisis-eye-on-ai/
  77. Kuehn BM. Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms. JAMA. 2024;331(6):463-5. DOI: 10.1001/jama.2023.25530
  78. Ktena I, Wiles O, Albuquerque I, et al. Generative models improve fairness of medical classifiers under distribution shifts. Nature Medicine. 2024. DOI: 10.1038/s41591-024-02838-6
  79. Anderer S, Hswen Y. “Scalable Privilege”—How AI Could Turn Data From the Best Medical Systems Into Better Care for All. JAMA. 2024;331(6):459-62. DOI: 10.1001/jama.2023.21719
  80. Naik N, Hameed BMZ, Shetty DK, et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Frontiers in Surgery. 2022;9. DOI: 10.3389/fsurg.2022.862322
  81. Saenz A, Chen E, Marklund H, et al. The MAIDA initiative: establishing a framework for global medical-imaging data sharing. The Lancet Digital Health. 2024;6(1):e6-e8. DOI: 10.1016/S2589-7500(23)00222-4
  82. eHealth NSW. Data Lake. Sydney: NSW Health; 2024 [cited 7 Mar 2024]. Available from: https://www.ehealth.nsw.gov.au/solutions/data-analytics/data-management/data-lake
  83. Ganapathi S, Palmer J, Alderman JE, et al. Tackling bias in AI health datasets through the STANDING Together initiative. Nature Medicine. 2022;28(11):2232-3. DOI: 10.1038/s41591-022-01987-w
  84. Fang C, Dziedzic A, Zhang L, et al. Decentralised, collaborative, and privacy-preserving machine learning for multi-hospital data. eBioMedicine. 2024;101. DOI: 10.1016/j.ebiom.2024.105006
  85. Nature Research. Why Japan is a leader in radiological research. London: Nature; 2024 [cited 08 Apr 2024]. Available from: https://www.nature.com/articles/d42473-023-00449-2
  86. Shah NH, Halamka JD, Saria S, et al. A Nationwide Network of Health AI Assurance Laboratories. JAMA. 2024;331(3):245-9. DOI: 10.1001/jama.2023.26930
  87. University of Melbourne. Artificial Intelligence Assurance Lab. Melbourne: University of Melbourne; 2024 [cited 6 Mar 2024]. Available from: https://cis.unimelb.edu.au/ai-assurance
  88. Jones E. Explainer: What is a foundational model. London: Ada Lovelace Institute; 2023 [cited 20 Apr 2024]. Available from: https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/
  89. Kim C, Gadgil SU, DeGrave AJ, et al. Transparent medical image AI via an image–text foundation model grounded in medical literature. Nature Medicine. 2024;30(4):1154-65. DOI: 10.1038/s41591-024-02887-x
  90. Medicines and Healthcare products Regulatory Agency (MHRA). Good Machine Learning Practice for Medical Device Development: Guiding Principles. London: MHRA; 2024 [cited 22 Feb 2024]. Available from: https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles
  91. Morley J, Hamilton N, Floridi L. Selling NHS patient data. BMJ. 2024;384:q420. DOI: 10.1136/bmj.q420
  92. OECD Policy Observatory. Why policymakers worldwide must prioritise security for AI. Paris: Organisation for Economic Co-operation and Development; 2023 [cited 17 Oct 2023]. Available from: https://oecd.ai/en/wonk/policymakers-prioritise-security
  93. World Health Organization (WHO). WHO calls for safe and ethical AI for health. Geneva: WHO; 2023 [cited 24 Jul 2023]. Available from: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
  94. Australian Signals Directorate. Engaging with Artificial Intelligence. Canberra: Australian Government; 2024 [cited 6 Mar 2024]. Available from: https://www.cyber.gov.au/resources-business-and-government/governance-and-user-education/governance/engaging-with-artificial-intelligence
  95. Khalid N, Qayyum A, Bilal M, et al. Privacy-preserving artificial intelligence in healthcare: Techniques and applications. Computers in Biology and Medicine. 2023;158:106848. DOI: 10.1016/j.compbiomed.2023.106848
  96. Office of the Victorian Information Commissioner. Artificial Intelligence – Understanding Privacy Obligations. Melbourne: Victorian Government; 2021 [cited 6 Mar 2024]. Available from: https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/
  97. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework. Washington, DC: US Government; 2023 [cited 6 Mar 2024]. Available from: https://www.nist.gov/itl/ai-risk-management-framework
  98. The Office of the Privacy Commissioner of Canada (OPC). Principles for responsible, trustworthy and privacy-protective generative AI technologies. Quebec: OPC; 2023 [cited 6 Mar 2024]. Available from: https://priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai/
  99. He X, Zheng X, Ding H. Existing Barriers Faced by and Future Design Recommendations for Direct-to-Consumer Health Care Artificial Intelligence Apps: Scoping Review. J Med Internet Res. 2023;25:e50342. DOI: 10.2196/50342
  100. Chew HSJ, Achananuparp P. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. J Med Internet Res. 2022;24(1):e32939. DOI: 10.2196/32939
  101. Sullivan C, Pointon K. Artificial intelligence in health care: nothing about me without me. Medical Journal of Australia. 2024. DOI: 10.5694/mja2.52282
  102. Han R, Acosta JN, Shakeri Z, et al. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. The Lancet Digital Health. 2024;6(5):e367-e73. DOI: 10.1016/S2589-7500(24)00047-5
  103. Banerjee S. Involving patients in AI research to build trustworthy systems. Cambridge, UK: University of Cambridge; 2022 [cited 6 Mar 2024]. Available from: https://acceleratescience.github.io/machine-learning/2022/11/08/involving-patients-in-ai-research-to-build-trustworthy-systems.html
  104. Rogan J, Bucci S, Firth J. Health Care Professionals’ Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health. 2024;11:e49577. DOI: 10.2196/49577
  105. Neri E, Aghakhanyan G, Zerunian M, et al. Explainable AI in radiology: a white paper of the Italian Society of Medical and Interventional Radiology. La radiologia medica. 2023;128(6):755-64. DOI: 10.1007/s11547-023-01634-5
  106. Editorial. Data sovereignty in genomics and medical research. Nature Machine Intelligence. 2022;4(11):905-6. DOI: 10.1038/s42256-022-00578-1
  107. Australian Alliance for Artificial Intelligence in Healthcare (AAAIH). A roadmap for artificial intelligence in healthcare for Australia. Sydney: AAAIH; 2021 [cited 24 Jul 2023]. Available from: https://aihealthalliance.org/wp-content/uploads/2021/12/AAAiH_Roadmap_1Dec2021_FINAL.pdf
  108. Banerjee S, Alsop P, Jones L, et al. Patient and public involvement to build trust in artificial intelligence: A framework, tools, and case studies. Patterns. 2022;3(6). DOI: 10.1016/j.patter.2022.100506
  109. Wrightson-Hester AR, Anderson G, Dunstan J, et al. An Artificial Therapist (Manage Your Life Online) to Support the Mental Health of Youth: Co-Design and Case Series. JMIR Hum Factors. 2023 Jul 21;10:e46849. DOI: 10.2196/46849
  110. Easton K, Potter S, Bec R, et al. A Virtual Agent to Support Individuals Living With Physical and Mental Comorbidities: Co-Design and Acceptability Testing. J Med Internet Res. 2019 May 30;21(5):e12996. DOI: 10.2196/12996
  111. Hudson S, Nishat F, Stinson J, et al. Perspectives of Healthcare Providers to Inform the Design of an AI-Enhanced Social Robot in the Pediatric Emergency Department. Children (Basel). 2023 Sep 6;10(9). DOI: 10.3390/children10091511
  112. Yu J, Shen N, Conway S, et al. A holistic approach to integrating patient, family, and lived experience voices in the development of the BrainHealth Databank: a digital learning health system to enable artificial intelligence in the clinic. Front Health Serv. 2023;3:1198195. DOI: 10.3389/frhs.2023.1198195
  113. Banerjee S. Patient and public involvement to build trust in artificial intelligence: a framework, tools and case studies. UK: GitHub; 2022 [cited 6 Mar 2024]. Available from: https://github.com/neelsoumya/outreach_ppi
  114. van de Loo B, Linn AJ, Medlock S, et al. AI-based decision support to optimize complex care for preventing medication-related falls. Nature Medicine. 2024. DOI: 10.1038/s41591-023-02780-z
  115. Hua D, Petrina N, Young N, et al. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med. 2024 Jan;147:102698. DOI: 10.1016/j.artmed.2023.102698
  116. MIT Technology Review Insights. The AI Effect: How artificial intelligence is making health care more human. Cambridge, MA: MIT; 2019 [cited 14 Nov 2023]. Available from: https://www.gehealthcare.co.uk/-/jssmedia/61b7b6b1adc740e58d4b86eef1bb6604.pdf
  117. Pelly M, Fatehi F, Liew D, et al. Artificial intelligence for secondary prevention of myocardial infarction: A qualitative study of patient and health professional perspectives. International Journal of Medical Informatics. 2023;173:105041. DOI: 10.1016/j.ijmedinf.2023.105041
  118. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019 Jun;6(2):94-8. DOI: 10.7861/futurehosp.6-2-94
  119. Kovoor JG, Bacchi S, Sharma P, et al. Artificial intelligence for surgical services in Australia and New Zealand: opportunities, challenges and recommendations. Medical Journal of Australia. 2024. DOI: 10.5694/mja2.52225
  120. Henry KE, Kornfield R, Sridharan A, et al. Human–machine teaming is key to AI adoption: clinicians’ experiences with a deployed machine learning system. npj Digital Medicine. 2022;5(1):97. DOI: 10.1038/s41746-022-00597-7
  121. The King's Fund. Preparing staff for the digital NHS of the future. London: The King's Fund; 2021 [cited 7 Mar 2024]. Available from: https://www.kingsfund.org.uk/events/preparing-staff-digital-nhs-future
  122. Visram S, Rogers Y, Sebire NJ. Developing a conceptual framework for the early adoption of healthcare technologies in hospitals. London: Nature Medicine; 2024.
  123. Mandl KD, Gottlieb D, Mandel JC. Integration of AI in healthcare requires an interoperable digital data ecosystem. Nature Medicine. 2024. DOI: 10.1038/s41591-023-02783-w
  124. Mistry P, Maguire D, Chickwira K, et al. Interoperability is more than technology. London: King's Fund; 2022 [cited 6 Mar 2024]. Available from: https://www.kingsfund.org.uk/insight-and-analysis/reports/digital-interoperability-technology
  125. Sodeau A, Fox A. Influence of nurses in the implementation of artificial intelligence in health care: a scoping review. Australian Health Review. 2022;46(6):736-41.
  126. Australian Medical Association. Artificial Intelligence in Healthcare. Australia: AMA; 2023 [cited 19 Dec 2023]. Available from: https://www.ama.com.au/articles/artificial-intelligence-healthcare
  127. Zicari RV, Ahmed S, Amann J, et al. Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier. Frontiers in Human Dynamics. 2021;3. DOI: 10.3389/fhumd.2021.688152
  128. European Parliament. Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts. European Parliament; 2022 [cited 13 Jul 2023]. Available from: https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf
  129. Ganapathi S, Duggal S. Exploring the experiences and views of doctors working with Artificial Intelligence in English healthcare; a qualitative study. PLoS One. 2023;18(3):e0282415. DOI: 10.1371/journal.pone.0282415
  130. Moulds A, Horton T. What do technology and AI mean for the future of work in health care? London: The Health Foundation; 2023 [cited 7 Mar 2024]. Available from: https://www.health.org.uk/publications/long-reads/what-do-technology-and-ai-mean-for-the-future-of-work-in-health-care
  131. Organisation for Economic Co-operation and Development (OECD). Using AI in the workplace. Paris: OECD; 2024 [cited 28 Mar 2024]. Available from: https://www.oecd.org/publications/using-ai-in-the-workplace-73d417f9-en.htm
  132. Spatharou A, Hieronimus S, Jenkins J. Transforming healthcare with AI: The impact on the workforce and organizations. New York: McKinsey & Company; 2020 [cited 17 Jul 2023]. Available from: https://www.mckinsey.com/industries/healthcare/our-insights/transforming-healthcare-with-ai
  133. Topol E. Preparing the healthcare workforce to deliver the digital future. London: Health Education England; 2019 [cited 11 Jan 2024]. Available from: https://topol.hee.nhs.uk/
  134. O'Connor S, Vercell A, Wong D, et al. The application and use of artificial intelligence in cancer nursing: A systematic review. European Journal of Oncology Nursing. 2024;68. DOI: 10.1016/j.ejon.2024.102510
  135. Silverberg M. Preparing Radiology Trainees for AI and ChatGPT. Oak Brook, IL: Radiological Society of North America; 2023 [cited 12 Feb 2024]. Available from: https://www.rsna.org/news/2023/july/radiology-trainees-ai-and-chatgpt
  136. NHS Digital Academy. Developing healthcare workers’ confidence in artificial intelligence. London: NHS England; 2023 [cited 11 Jan 2024]. Available from: https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/digital-literacy
  137. Schuur F, Rezazade Mehrizi MH, Ranschaert E. Training opportunities of artificial intelligence (AI) in radiology: a systematic review. European Radiology. 2021;31(8):6021-9. DOI: 10.1007/s00330-020-07621-y
  138. Chebrolu K, Shukla M, Varla H, et al. Health care’s quest for an enterprisewide AI strategy. United States: Deloitte Insights; 2022 [cited 17 Jul 2023]. Available from: https://www2.deloitte.com/us/en/insights/industry/health-care/ai-led-transformations-in-health-care.html
  139. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implementation Science. 2024;19(1):27. DOI: 10.1186/s13012-024-01357-9
  140. KPMG. Future of Work: Understanding the impacts of technology on the health workforce. Sydney: NSW Health; 2020 [cited 22 Feb 2024]. Available from: https://www.health.nsw.gov.au/workforce/horizons/Documents/future-of-work-healthcare-workforce.PDF
  141. American Hospital Association (AHA) Centre for Health Innovation. AI and the Health Workforce. Washington, DC: AHA; 2019 [cited 6 Mar 2024]. Available from: https://www.aha.org/center/emerging-issues/market-insights/ai/ai-and-health-care-workforce
  142. Hamoni R, Lin O, Matthews M, et al. Building Canada’s Future AI Workforce: In the Brave New (Post-Pandemic) World. Ottawa: The Information and Communications Technology Council, Canada; 2021 [cited 6 Mar 2024]. Available from: https://ictc-ctic.ca/reports/building-canadas-future-ai-workfore

Living evidence tables may include links to lower-quality sources, and the quality of each original source has not been formally assessed. Sources are monitored regularly, but because new information emerges rapidly, the tables may not always reflect the most current evidence. The tables are not peer reviewed, and inclusion of a source does not imply official recommendation or endorsement by NSW Health.

Last updated on 6 May 2024