Objectives/Purpose
This study explored the acceptability and ethicality of an artificial intelligence (AI)-based postoperative risk calculator integrated with a clinical decision support system (CDSS) to identify patients at high risk of complications following cancer surgery. The aim was to examine stakeholder views across domains of the Theoretical Framework of Acceptability (TFA) and inform future implementation.
Sample and Setting
Three focus groups with 15 consumers and five workshops with 19 health professionals were conducted at Peter MacCallum Cancer Centre. Consumers varied in age, cancer diagnosis, surgical experience, and familiarity with AI. Health professionals included junior and senior surgeons, anaesthetists, nurses, allied health practitioners, and health data analysts.
Procedures
Focus groups and workshops followed a semi-structured guide informed by the TFA and value-sensitive design principles. Participants discussed the acceptability, ethical implications, and perceived impact of the risk calculator. Transcripts were thematically analysed using deductive and inductive approaches.
Results
Both groups recognised the potential of the AI tool to enhance personalised care (perceived effectiveness) but emphasised that it must augment, not replace, clinical judgement. Consumers prioritised autonomy and consent, while clinicians raised concerns about fairness, transparency, and algorithmic bias (ethicality). Trust in the tool depended on human oversight, and confidence to override AI outputs was stronger among senior clinicians. Intervention coherence varied: clinicians understood the system's functionality, whereas consumers were uncertain how it would be used in practice. Both groups identified education and training as critical enablers of confident, appropriate adoption.
Conclusion and Clinical Implications
Stakeholder-informed design and implementation strategies are essential to ensure the ethical, acceptable, and clinically meaningful integration of AI-based postoperative risk tools. This study highlights the critical role of implementation science in supporting the responsible introduction of AI innovations. Understanding acceptability is foundational to building real-world evidence and to introducing AI tools in ways that are trusted, user-informed, and context-sensitive.