1. AN INSTITUTIONAL VIEW OF ALGORITHMIC IMPACT ASSESSMENTS
- Author
- Selbst, Andrew D.
- Subjects
- Artificial intelligence -- Research -- Usage, Harm principle (Ethics) -- Environmental aspects -- Research -- Social aspects, Bureaucracy -- Research, Environmental impact analysis -- Methods -- Research -- Social aspects, Algorithms -- Research -- Usage, Algorithm, Artificial intelligence, High technology industry, Law, National Environmental Policy Act of 1969
- Abstract
Scholars and advocates have proposed algorithmic impact assessments ('AIAs') as a regulatory strategy for addressing and correcting algorithmic harms. An AIA-based regulatory framework would require the creator of an algorithmic system to assess its potential socially harmful impacts before implementation and to create documentation that can be used later for accountability and future policy development. In practice, an impact assessment framework relies on expertise and information to which only the creators of the project have access. It is therefore inevitable that technology firms will have a degree of practical discretion in the assessment, and willing cooperation from firms is necessary to make the regulation work. But a regime that relies on good-faith partnership from the private sector also has strong potential to be undermined by the incentives and institutional logics of that sector. This Article argues that for AIA regulation to be effective, it must anticipate the ways that such regulation will be filtered through the private-sector institutional environment. This Article combines insights from governance, organizational theory, and computer science to explore how future AIA regulations may be implemented on the ground. An AIA regulation has two main goals: (1) to require firms to consider social impacts early and work to mitigate them before development, and (2) to create documentation of decisions and testing that can support future policy learning. The Article argues that institutional logics, such as liability avoidance and the profit motive, will render the first goal difficult to fully achieve in the short term because the practical discretion that firms have allows them room to undermine the AIA requirements. But AIAs can still be beneficial because the second goal does not require full compliance to be successful.
Over time, there is also reason to believe that AIAs can be part of a broader cultural shift toward accountability within the technology industry, which will lead to greater buy-in and less need for enforcement of documentation requirements. Given the degree to which an AIA regulation will rely on good-faith participation by regulated firms, AIAs must have synergy with how the field works rather than be in tension with it. For this reason, the Article argues that it is also crucial that regulators understand the technology industry itself, including the technology, the organizational culture, and emerging documentation standards. This Article demonstrates how emerging research within the field of algorithmic accountability can also inform the shape of AIA regulation. By looking at the different stages of development and so-called 'pause points,' regulators can know at which points firms can export information. Looking at AI ethics research can show what social impacts the field thinks are important and where it might miss issues that policymakers care about. Overall, understanding the industry can make the AIA documentation requirements themselves more legible to technology firms, easing the path for a future AIA mandate to be successful on the ground.
- Table of Contents
I. INTRODUCTION 119
II. ALGORITHMIC HARMS AND LIABILITY REGIMES 127
A. The Discriminatory Hiring Algorithm 128
B. The Unexplained Loan Denial 132
C. The Unsafe Medical AI [...]
- Published
- 2021