3.8 billion people worldwide lack access to basic healthcare. Artificial Intelligence (AI) has the potential to "glocalize" healthcare by bringing new diagnostics and treatments to neglected populations. Yet many infrastructural, legal, and ethical issues remain, such as the risk that AI embeds and propagates bias and discrimination. AI@CARE gathers experts in digital health and law to conceptualize and computationally model how bias and discrimination arise within medical AI. AI@CARE consists of 3 interrelated subprojects (SPs). SP1 models categories of bias and discriminatory algorithmic decision-making. SP2 assesses the adequacy of ethical and legal frameworks in relation to hidden or novel forms of bias. SP3 integrates SP1 and SP2 to develop ethical guidelines, legal reforms, and 'bias awareness checklists' for algorithm development, as well as design blueprints for healthcare AI solutions.
Katarzyna Wac, Professor of Computer Science, Quality of Life Technologies Lab, Human-Centered Computing, Dept of Computer Science (DIKU), Faculty of Science, KU
Timo Minssen, Professor of Law, Center on Advanced Studies on Biomedical Innovation Law (CeBIL), Faculty of Law, KU