
Faculty of Law, Chair of Prof. Hermstrüwer

Prof. Dr. Dr. Yoan Hermstrüwer

Rämistrasse 74, 8001 Zürich
RAI-H-105 (Professor)
RAI-H-107 (Assistants)
Map

Phone (Professor): +41 (44) 634 43 90
Phone (Assistants): +41 (44) 634 43 91
E-mail (Chair): lst.hermstruewer@ius.uzh.ch

News

Further Information

Öffentliches Recht als Verhaltensordnung

Behavioral law has evolved from selective challenges within economic analysis into the foundation of a theory of assumptions for legal scholarship as a whole. For the theory and practice of legal regulation to build on this body of knowledge, the empirical literature must be made accessible for legal purposes. Yet the law itself must also be re-examined from the perspective of how individuals think and act. There is far more to discover here than the law's concepts of the psychological domain: ultimately, what is at stake is the functioning of a state practice composed of human judgments and decisions. This volume takes up these tasks using the core subjects of public law as a "general part" of legal behavioral regulation in Germany. It systematically works through the canon of theories underlying behavioral analysis, uncovers connections within the legal material, and explores new possibilities of doctrinal construction and structuring.

The book will be published in September 2024.

Mohr Siebeck Verlag

Fair Governance with Humans and Machines

Abstract

How fair do people perceive government decisions based on algorithmic predictions? And to what extent can the government delegate decisions to machines without sacrificing perceived procedural fairness? Using a set of vignettes in the context of predictive policing, school admissions, and refugee matching, we explore how different degrees of human–machine interaction affect fairness perceptions and procedural preferences. We implement four treatments varying the extent of responsibility delegated to the machine and the degree of human involvement in the decision-making process, ranging from full human discretion, through machine-based predictions with high and low human involvement, to fully machine-based decisions. We find that machine-based predictions with high human involvement yield the highest fairness scores and fully machine-based decisions the lowest. Differences in accuracy assessments can partly explain these gaps. Fairness scores follow a similar pattern across contexts, with a negative level effect and lower fairness perceptions of human decisions in the context of predictive policing. Our results shed light on the behavioral foundations of several legal human-in-the-loop rules.

Link to the article

School Choice with Consent: An Experiment

Abstract

Public school choice often yields student assignments that are neither fair nor efficient. The efficiency-adjusted deferred acceptance mechanism (EADAM) allows students to consent to waive priorities that have no effect on their assignments. A burgeoning recent literature places EADAM at the centre of the trade-off between efficiency and fairness in school choice. Meanwhile, the Flemish Ministry of Education has taken the first steps to implement this algorithm in Belgium. We provide the first experimental evidence on the performance of EADAM against the celebrated deferred acceptance mechanism (DA). We find that both efficiency and truth-telling rates are higher under EADAM than under DA, even though EADAM is not strategy-proof. When the priority waiver is enforced, efficiency further increases, while truth-telling rates decrease relative to the EADAM variants where students can dodge the waiver. Our results challenge the importance of strategy-proofness as a prerequisite for truth-telling and portend a new trade-off between efficiency and vulnerability to preference manipulation.

Link to the article
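The deferred acceptance mechanism (DA) that serves as the benchmark in the study above can be sketched in a few lines. The following is an illustrative sketch of student-proposing DA with toy data of my own; it is not the code used in the experiment, and EADAM's additional consent-based priority-waiver step is omitted.

```python
# Minimal sketch of the student-proposing deferred acceptance (DA)
# mechanism. Student names, school names, and capacities are illustrative.

def deferred_acceptance(student_prefs, school_priorities, capacities):
    """Students apply down their preference lists; each school tentatively
    holds the highest-priority applicants up to its capacity and rejects
    the rest, who then apply to their next choice."""
    # Precompute each school's priority rank for every student.
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priorities.items()}
    next_choice = {stu: 0 for stu in student_prefs}  # pointer into pref list
    held = {s: [] for s in school_priorities}        # tentatively held students
    unmatched = list(student_prefs)

    while unmatched:
        stu = unmatched.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue  # list exhausted: student stays unassigned
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # Keep the highest-priority applicants up to capacity; reject the rest.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            unmatched.append(held[school].pop())

    return {stu: s for s, studs in held.items() for stu in studs}
```

Because rejected students re-apply until their lists are exhausted, no student ever envies a seat held by someone with lower priority, which is the fairness (stability) property DA guarantees; EADAM then recovers efficiency by letting students consent to waive priorities that do not affect their own assignment.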

School Choice with Consent: An Experiment

When do students game the school admissions system? It depends. Complexity may push students to refrain from manipulating their school rankings, even when the admissions system is manipulable.

Fair Governance with Humans and Machines

Is an AI really less fair than a human decision-maker? There is a human-AI fairness gap. But an AI doing most of the work is considered as fair as a human doing all the work.

Which decisions are you OK with AI making?

Interview on a study conducted at the Max Planck Institute for Research on Collective Goods. A follow-up project, a collaboration between the University of Zurich, ETH Zurich, Georgetown University, the University of Hong Kong, and the Max Planck Institute, is under way.

Link to the interview