Description
The use of Artificial Intelligence (AI) in medical practice is on the rise. AI systems can process large datasets and recognise complex relationships in medical data that humans may find difficult to discern. They can thereby enhance the efficiency and accuracy of medical processes and save resources.
Nevertheless, the use of AI in medical practice is not without risk. Automated decision-making processes may encode biases that result in disparate treatment of, and discrimination against, specific patient groups. The measurement of fairness in AI is an evolving field of research, with no standardised definition or method for assessing it.
This contribution discusses global sensitivity analysis as a tool for evaluating the fairness of AI deployment in medical practice. A concrete example is used to demonstrate how sensitivity analysis can be performed in R to identify potential discrimination. The results emphasise the need to implement mechanisms to ensure fairness in the use of AI in medical practice and raise awareness that human biases may also influence automated decision-making processes.
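The abstract's example is carried out in R; as a rough illustration of the underlying idea, the following is a minimal Python/NumPy sketch of a variance-based (first-order Sobol) global sensitivity analysis applied to a purely hypothetical decision model. The model, its inputs (`age`, `biomarker`, `group`), and all coefficients are invented for this sketch and are not taken from the contribution: a noticeable sensitivity of the decision score to the protected attribute `group` would flag potential discrimination.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_model(age, biomarker, group):
    # Hypothetical decision score (invented for illustration).
    # Ideally 'group' should have no influence; any sensitivity
    # to it signals potential discrimination.
    return 0.03 * age + 0.5 * biomarker + 0.4 * group

def sample(n):
    # Draw inputs: age in years, a standardised biomarker,
    # and a binary protected-group indicator.
    return (rng.uniform(20, 90, n),
            rng.normal(0.0, 1.0, n),
            rng.integers(0, 2, n).astype(float))

def first_order_sobol(f, n_outer=200, n_inner=200):
    """Brute-force first-order indices S_i = Var(E[Y|X_i]) / Var(Y)."""
    var_y = f(*sample(n_outer * n_inner)).var()
    indices = []
    for i in range(3):
        fixed = sample(n_outer)
        cond_means = np.empty(n_outer)
        for j in range(n_outer):
            inner = list(sample(n_inner))
            inner[i] = np.full(n_inner, fixed[i][j])  # fix X_i, vary the rest
            cond_means[j] = f(*inner).mean()          # estimate E[Y | X_i]
        indices.append(cond_means.var() / var_y)
    return indices

s_age, s_bio, s_group = first_order_sobol(risk_model)
print(f"S_age={s_age:.2f}  S_biomarker={s_bio:.2f}  S_group={s_group:.2f}")
```

Here a non-negligible `S_group` reveals that the score depends on group membership even though, from a medical point of view, it should not. Dedicated packages (e.g. the R package `sensitivity` or Python's SALib) provide more efficient estimators than this brute-force double loop.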
Type of presentation: Poster