
May 2020: The Laws and Ethics of Algorithmic Bias in Healthcare – New Project kicks off

The Centre for Advanced Studies in Biomedical Innovation Law (CeBIL) and the Department of Computer Science (DIKU), through its Quality of Life Technologies Lab, are excited to announce that their research proposal AI@CARE: Laws and Ethics of Algorithmic Bias in Healthcare has been awarded funding from the UCPH Data+ pool.

The UCPH Data+ funding pool was established in 2019 to promote the integration of data science into other scientific disciplines, “unlocking the potential for innovative and risk-based research”.

Artificial Intelligence (AI) has the potential to “glocalize” healthcare by bringing new diagnostics and treatments to neglected populations. Yet many infrastructural, legal and ethical issues remain, such as the risk that AI encodes and spreads bias and discrimination.

The AI@CARE project brings together experts in digital health and law to conceptualize and computationally model how bias and discrimination arise within medical AI, why they arise, and how the problem can best be addressed through enforceable legal frameworks and reliable technological support (checklists and design blueprints), with the aim of democratizing medicine. The starting point is an existing, representative, large-scale longitudinal dataset covering the Danish population, together with a set of algorithms for assessing individuals’ long-term risk of chronic illness.

“AI@CARE will consist of three interrelated subprojects, researching in depth both qualitative and quantitative aspects of the bias and discrimination”, says Prof. Katarzyna Wac, co-principal investigator at the Quality of Life Technologies Lab. Subproject 1 will build on existing legal theories and formulate new theories and approaches to bias and discrimination, while Subproject 2 will operationalize these theories from a computer science perspective. Finally, Subproject 3 will merge the results of Subprojects 1 and 2 to develop ethical guidelines, legal reforms, ‘bias awareness checklists’ for algorithm development, and design blueprints for healthcare AI solutions.

According to Prof. Timo Minssen, co-principal investigator at CeBIL, “This combination will allow us to provide one of the first conceptual accounts of algorithmic/infrastructural, legal and ethical factors that are relevant to bias and discrimination scenarios in healthcare.”

The project started on 1 April 2020 and will run for three years, involving postdoc Audrey Lebret at CeBIL and PhD student Sofia Laghouila at the QoL Lab.

Project Abstract