AI Research Ethics
Symposium, August 7-9, 2024

Idea and Program for the Symposium on AI Research Ethics

The symposium on AI research ethics will take place on August 7-9, 2024, as part of the “Thematic Week on Science Ethics: Ethical Dimensions of AI Research and AI in Research” in Hanover and is one of four symposia held during that week. Our symposium focuses specifically on the perspective of ethics committees and is titled “Challenges of AI for the Work of Research Ethics Committees: Issues of Evaluation, Resilience of Professional Standards, Procedural Questions.”

The starting point of our considerations is that AI ethics must be implemented, and that only the concrete procedural steps and review questions involved in implementing it will give future AI ethics its shape. Based on this assumption, the aim of the symposium is to discuss the work of research ethics committees in a “realistic,” experience-based manner, under the conditions of the actually existing landscape of institutionalized committees. How should, and how can, AI research projects realistically be evaluated? These questions call for a forward-looking answer that does justice to the peculiarities and potentials of the technology while remaining scientifically sound and practical.

We are organizing this symposium against the background of our own experience in committee work: Petra Gehring founded the interdisciplinary ethics committee at TU Darmstadt and was its long-time chair, and I served there as managing director. As the working mode of the symposium, instead of discussing in general terms what is desirable or feasible, we want to discuss typical problems from the concrete practice of research ethics committees in evaluating and assessing research projects on or with AI methods – that is, experience-based and “case-driven.” We see great challenges in this field, affecting both genuinely computer science-based research projects and research projects in many other disciplines that use AI as a “tool.” In both fields – projects from computer science as well as projects from other disciplines – questions arise about evaluation criteria and their suitability in application, as well as procedural questions given the existing “committee landscape” in Germany (i.e., committee responsibilities, committee compositions, and the kinds of expertise required).

The exchange during the symposium will be based on (anonymized and abstracted) case examples or case-related considerations that we ask the approximately 25-30 participants to bring along and to send to us in advance. The presentation of such a “case” (with the corresponding considerations) should take about 5 minutes (max. two slides) on site. Thematically related presentations will be given in “triplets,” one after the other, and then discussed intensively, including in generalizing terms. This format allows the different perspectives of practitioners from ethics committees and of experts in science and research ethics to be related to one another, creating a shared discussion space for the challenges AI poses for the work of research ethics committees – at least that is the idea. Contributions of case examples or considerations are very welcome, but you are also welcome to join the discussions without cases of your own. We will provide all participants with a template for submitting cases at a later date.

Over the three days from August 7-9, we want to divide the exchange into three parts, which will inevitably overlap to some extent:

1. Criteria

The fundamental challenges for the criteria of ethical review include problem areas such as the reproducibility and opacity of AI process steps (“black box”) and approaches to AI explainability (which, however, lend themselves more to product design than to reviewing research processes), as well as data bias and discrimination. Data protection aspects of AI research in computer science (and the argument that the GDPR is a disadvantage in research competition) also raise questions. In academic cultures outside computer science, the problem of the epistemic opacity of AI is exacerbated by the automation gains built into user-friendly “tools.” Here, in the era of the product family around “ChatGPT,” a whole wave of commercial offerings is reaching research. How ethical research standards can be transferred to the experimental use of such tools is unclear. Viable paths, including for protecting the data of research subjects and researchers, must be weighed on the basis of examples. The possible dual-use character of AI research projects is also important.

2. Professional Standards

In a second part, we will discuss procedural and pragmatic challenges of the work of ethics committees under the keyword “resilience of professional standards.” This concerns the quality of research ethics applications, communication with applicants, and possible shifts in professional standards that researchers themselves put forward (here: as an argument addressed to ethics committees). Procedurally and pragmatically, AI research ethics is under pressure to justify itself. How can research ethics requirements be channeled into a changing normality of research in a way that ensures compliance with standards in individual cases?

3. Procedures

So far, there are hardly any computer science ethics committees in Germany, and with the emerging ubiquitous use of AI tools, the need for AI expertise is also growing in (all?) non-informatic types of ethics committees. On the question of how ethical reviews can be well organized in individual cases (more interdisciplinary committees, involvement of two committees, selective co-optation of expertise), there is likely to be a considerable practical need for exchange.

At the end of the symposium and in its aftermath, the exchange is intended to feed into revising and refining the ZEVEDI guideline on AI research ethics, which is conceived as a living document (link to the guideline).



Contact

Andreas Brenneis ()

Petra Gehring ()