Publications
Liability for robots II: an economic analysis
A. Guerra; F. Parisi; D. Pi (2021)
2021
Literature
Artificial Intelligence and new technologies regulation
This is the second of two companion papers that discuss accidents caused by robots. In the first paper (Guerra et al., 2021), we presented the novel problems posed by robot accidents and assessed the related legal approaches and institutional opportunities. In this paper, we build on that analysis to consider a novel liability regime, which we refer to as the 'manufacturer residual liability' rule. This rule makes operators and victims liable for accidents caused by their own negligence, thereby incentivizing them to act diligently, and makes manufacturers residually liable for non-negligent accidents, thereby incentivizing them to make optimal investments in R&D for robot safety. In turn, this rule will bring down the price of safer robots, driving unsafe technology out of the market. Thanks to the percolation effect of residual liability, operators will also be incentivized to adopt optimal activity levels in their use of robots.
Liability for robots I: legal challenges
A. Guerra; F. Parisi; D. Pi (2021)
2021
Literature
Artificial Intelligence and new technologies regulation
In robot torts, robots carry out activities that are partially controlled by a human operator. Several legal and economic scholars across the world have argued for the need to rethink legal remedies as we apply them to robot torts. Yet, to date, there exists no general formulation of liability for robot accidents, and the proposed solutions differ across jurisdictions. We proceed in our research with a set of two companion papers. In this paper, we present the novel problems posed by robot accidents and assess the legal challenges and institutional prospects that policymakers face in the regulation of robot torts. In the companion paper, we build on the present analysis and use an economic model to propose a new liability regime that blends negligence-based rules and strict manufacturer liability rules to create optimal incentives in robot torts.
Corruption from a Regulatory Perspective
M. De Benedetto (2021)
2021
Literature
Corruption prevention
This book seeks to enrich and, in some cases, reverse current ideas on corruption and its prevention. It is a long-held belief that sanctions are the best guard against corrupt practice. This innovative work argues that in some cases sanctions paradoxically increase corruption and that controls provide opportunities for corrupt transactions. Instead, it suggests that better regulation and responsive enforcement, not sanctions, offer the most effective response to corruption. Taking both a theoretical and an applied approach, it examines the question from a global perspective, drawing in particular on a regulatory perspective, to provide a model for tackling corrupt practices.
Eight Ways to Institutionalise Deliberative Democracy | OECD Public Governance Policy Paper
OECD (2021)
2021
Documents
Participative and deliberative democracy
This guide for public officials and policy makers outlines eight models for institutionalising representative public deliberation to improve collective decision making and strengthen democracy. Deliberative bodies like citizens’ assemblies create the democratic spaces for broadly representative groups of people to learn together, grapple with complexity, listen to one another, and find common ground on solutions. Increasingly, public authorities are reinforcing democracy by making use of deliberative processes in a structural way, beyond one-off initiatives that are often dependent on political will. The guide provides examples of how to create structures that allow representative public deliberation to become an integral part of how certain types of public decisions are taken.
Algorithmic disclosure rules
Fabiana Di Porto (2021)
2021
Literature
Artificial Intelligence and new technologies regulation
During the past decade, a small but rapidly growing number of Law & Tech scholars have been applying algorithmic methods in their legal research. This Article does so too, in an attempt to rescue disclosure regulation from failure: a normative strategy that has long been considered dead by legal scholars but is conspicuously abused by rule-makers. Existing proposals to revive disclosure duties, however, focus either on industry policies (e.g. seeking to reduce consumers' costs of reading) or on rulemaking (e.g. simplifying linguistic intricacies). But failure may well depend on both. This Article therefore develops a 'comprehensive approach', suggesting the use of computational tools to cope with linguistic and behavioral failures at both the enactment and implementation phases of disclosure duties, thus filling a void in the Law & Tech scholarship. Specifically, it outlines how algorithmic tools can be used in a holistic manner to address the many failures of disclosures, from rulemaking in parliament to consumer screens. It suggests a multi-layered design in which lawmakers deploy three tools to produce optimal disclosure rules: machine learning, natural language processing, and behavioral experimentation through regulatory sandboxes. To clarify how and why these tasks should be performed, disclosures in the contexts of online contract terms and online privacy are taken as examples. Because algorithmic rulemaking is frequently met with well-justified skepticism, problems of its compatibility with legitimacy, efficacy, and proportionality are also discussed.
'I See Something You Don't See'. A Computational Analysis of the Digital Services Act and the Digital Markets Act
F. Di Porto; T. Grote; G. Volpi (2021)
2021
Literature
Digital markets
In its latest proposals, the Digital Markets Act (DMA) and the Digital Services Act (DSA), the European Commission puts forward several new obligations for online intermediaries, especially large online platforms and “gatekeepers.” Both are expected to serve as a blueprint for regulation in the United States, where lawmakers have also been investigating competition on digital platforms and where new antitrust bills passed the House Judiciary Committee as of June 11, 2021. This Article investigates whether all stakeholder groups share the same understanding and use of the relevant terms and concepts of the DSA and DMA. Leveraging the power of computational text analysis, we find significant differences in the use of terms such as “gatekeepers,” “self-preferencing,” “collusion,” and others in the position papers of the consultation process that informed the drafting of the two Commission proposals. In addition, sentiment analysis shows that in some cases these differences also come with dissimilar attitudes. While this may not be surprising for new concepts such as gatekeepers or self-preferencing, the same is not true for other terms, like “self-regulatory,” which not only is used differently by stakeholders but is also viewed more favorably by medium and big companies and organizations than by small ones. We conclude by sketching out how different computational text analysis tools could be combined to provide helpful insights for both rulemakers and legal scholars.
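The kind of computational comparison the abstract describes, counting how often each stakeholder group uses a contested term and attaching a rough sentiment score to its usage, can be illustrated with a minimal sketch. The texts, group names, and tiny sentiment lexicon below are invented for illustration; they are not drawn from the paper's corpus, and published work of this kind typically relies on full NLP pipelines rather than word lists.

```python
from collections import Counter

# Toy position papers from two hypothetical stakeholder groups
# (illustrative assumptions, not data from the actual consultation).
papers = {
    "big_tech": ("self-regulatory codes are effective and proportionate; "
                 "gatekeepers welcome flexible self-regulatory tools"),
    "smes": ("self-regulatory schemes failed; gatekeepers abuse "
             "self-preferencing and harm smaller rivals"),
}

# Minimal hand-made sentiment lexicon (an assumption for this sketch).
positive = {"effective", "proportionate", "welcome", "flexible"}
negative = {"failed", "abuse", "harm"}

def term_frequency(text: str) -> Counter:
    """Count lowercase tokens, stripping trailing punctuation."""
    tokens = (t.strip(";,.") for t in text.lower().split())
    return Counter(t for t in tokens if t)

def sentiment(text: str) -> int:
    """Lexicon-based score: positive hits minus negative hits."""
    counts = term_frequency(text)
    return (sum(counts[w] for w in positive)
            - sum(counts[w] for w in negative))

for group, text in papers.items():
    counts = term_frequency(text)
    print(group, counts["self-regulatory"], sentiment(text))
```

Even this toy version shows the pattern the Article reports for “self-regulatory”: both groups use the term, but the surrounding language, and hence the sentiment score, diverges sharply between large and small players.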
Regulating New Tech: Problems, Pathways, and People
Cary Coglianese (2021)
2021
Literature
Artificial Intelligence and new technologies regulation
New technologies bring with them many promises, but also a series of new problems. Even though these problems are new, they are not unlike the types of problems that regulators have long addressed in other contexts. The lessons from regulation in the past can thus guide regulatory efforts today. Regulators must focus on understanding the problems they seek to address and the causal pathways that lead to these problems. Then they must undertake efforts to shape the behavior of those in industry so that private sector managers focus on their technologies’ problems and take actions to interrupt the causal pathways. This means that regulatory organizations need to strengthen their own technological capacities; most of all, however, they need to build their human capital. Successful regulation of technological innovation rests with top-quality people who possess the background and skills needed to understand new technologies and their problems.

