Within academic surgery, inherent bias and inequities exist that may influence clinical management. Pulse oximetry, for example, has historically been validated in populations that were not racially diverse, yet it remains an important component of effective healthcare delivery for our modern, multinational population. As digitization accelerates, these inequities grow: large AI datasets, which promise to support clinical decision making and guide public health policy through data-driven approaches, suffer from similar limitations. Novel wearable sensors face comparable validation concerns. The clinical significance of potential bias in the emerging fields of digital health and artificial intelligence remains unknown.
Clinical research is at risk of recruitment bias towards participants who fit a Western, Educated, Industrialized, Rich, Democratic (WEIRD) profile, leaving the human population as a whole disproportionately represented. AI algorithms subsequently trained on data collated from these studies will, in turn, be biased. Standardization of data collection and reporting can support interoperability and allow inclusivity to be measured, highlighting areas of bias and directions for further research. Recent advances in the development of AI-specific reporting guidelines are a welcome step towards tackling these inequities.
As industry continues to develop novel wearable solutions, validation studies have the potential to improve workflow efficiency within healthcare. Trials should aim to test the integration of such technologies pragmatically, accounting for human behaviour, alongside robust randomized trials with broad, inclusive eligibility criteria to establish validity.
We welcome submissions from all geographic regions using a range of appropriate methodological approaches, including quantitative studies, qualitative studies, perspective pieces, and systematic reviews. Submissions may include:
· Research highlighting current limitations in digital health and AI.
· Research describing human behaviors, facilitators, and barriers that influence inherent bias, and possible strategies to address them.
· Research recommending implementation strategies for current digital solutions.
· Research aiming to address bias in big data, wearable sensors, other novel digital solutions (mHealth), or AI in surgery.