210 results for "AI governance"
Search Results
2. Editorial: Protecting privacy in neuroimaging analysis: balancing data sharing and privacy preservation.
- Author
-
Mehmood, Rashid, Lazar, Mariana, Liang, Xiaohui, Corchado, Juan M., and See, Simon
- Subjects
ARTIFICIAL intelligence, FEDERATED learning, DATA privacy, MAGNETIC resonance imaging, TECHNOLOGICAL innovations, MEDICAL informatics, SCALABILITY
- Abstract
The editorial in Frontiers in Neuroinformatics discusses the importance of protecting privacy in neuroimaging analysis while balancing data sharing and privacy preservation. It highlights the challenges of handling sensitive data generated by techniques like MRI and the need for innovative methodologies to safeguard individual privacy. The research topic explores interdisciplinary solutions, such as federated learning and differential privacy, to advance the field ethically and legally. The contributions in the editorial showcase how AI-driven methodologies can enhance efficiency, maintain privacy, and promote transparency in neuroimaging research, emphasizing the importance of aligning technical advancements with ethical principles. [Extracted from the article]
- Published
- 2025
- Full Text
- View/download PDF
3. Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance.
- Author
-
Blanchard, Alexander, Thomas, Christopher, and Taddeo, Mariarosaria
- Abstract
The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative choices and corresponding tradeoffs that are involved in specifying guidance for the implementation of AI ethics principles in the defence domain. These correspond to: the AI lifecycle model used; the scope of stakeholder involvement; the accountability goals chosen; the choice of auditing requirements; and the choice of mechanisms for transparency and traceability. We provide initial recommendations for navigating these tradeoffs and highlight the importance of a pro-ethical institutional culture. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
4. Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing.
- Author
-
Alsaigh, Roba, Mehmood, Rashid, Katib, Iyad, Liang, Xiaohui, Alshanqiti, Abdullah, Corchado, Juan M., and See, Simon
- Subjects
DATA privacy, FEDERATED learning, ARTIFICIAL intelligence, REWARD (Psychology), DATA libraries, METADATA, MEDICAL informatics, TECHNOLOGICAL progress
- Abstract
The article discusses the intersection of artificial intelligence (AI) and neuroscience in the field of neuroinformatics, emphasizing the need for robust governance frameworks that prioritize privacy and data sharing. It explores the state-of-the-art advancements in neuroinformatics, challenges such as data standardization and privacy, and evaluates AI governance regulations across regions like the EU, USA, UK, and China. The paper highlights the alignment, gaps, and challenges in harmonizing AI governance regulations with neuroinformatics practices, offering strategic recommendations for better integration and ethical advancement in research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
5. Towards a Human Rights-Based Approach to Ethical AI Governance in Europe.
- Author
-
Hogan, Linda and Lasek-Markey, Marta
- Subjects
CIVIL rights, VALUES (Ethics), ARTIFICIAL intelligence, SUBSIDIARITY, ETHICS
- Abstract
As AI-driven solutions continue to revolutionise the tech industry, scholars have rightly cautioned about the risks of 'ethics washing'. In this paper, we make a case for adopting a human rights-based ethical framework for regulating AI. We argue that human rights frameworks can be regarded as the common denominator between law and ethics and have a crucial role to play in the ethics-based legal governance of AI. This article examines the extent to which human rights-based regulation has been achieved in the primary example of legislation regulating AI governance, i.e., the EU AI Act 2024/1689. While the AI Act has a firm commitment to protect human rights, which in the EU legal order have been given expression in the Charter of Fundamental Rights, we argue that this alone does not contain adequate guarantees for enforcing some of these rights. This is because issues such as EU competence and the principle of subsidiarity make the idea of protection of fundamental rights by the EU rather than national constitutions controversial. However, we argue that human rights-based, ethical regulation of AI in the EU could be achieved through contextualisation within a values-based framing. In this context, we explore what are termed 'European values', which are values on which the EU was founded, notably Article 2 TEU, and consider the extent to which these could provide an interpretative framework to support effective regulation of AI and avoid 'ethics washing'. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. LLMs beyond the lab: the ethics and epistemics of real-world AI research.
- Author
-
Mollen, Joost
- Abstract
Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To address this gap, this paper provides an analysis of real-world research with LLMs and generative AI, assessing both its epistemic value and ethical concerns such as the potential for interpersonal and societal research harms, the increased privatization of AI learning, and the unjust distribution of benefits and risks. This paper discusses these concerns alongside four moral principles influencing research ethics standards: non-maleficence, beneficence, respect for autonomy, and distributive justice. I argue that real-world AI research faces challenges in meeting these principles and that these challenges are exacerbated by absent or imperfect current ethical governance. Finally, I chart two distinct but compatible ways forward: through ethical compliance and regulation and through moral education and cultivation. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
7. Nullius in Explanans: an ethical risk assessment for explainable AI.
- Author
-
Nannini, Luca, Huyskes, Diletta, Panai, Enrico, Pistilli, Giada, and Tartaro, Alessio
- Abstract
Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short of accounting for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation of XAI systems. Furthermore, we address a broader range of contextual risks jeopardizing their security, accountability, and reception, alongside other cognitive, social, and ethical concerns of explanations. We advance a multi-layered risk assessment framework, where each layer advances strategies for practical intervention, management, and documentation of XAI systems within organizations. Recognizing the theoretical nature of the framework advanced, we discuss it in a conceptual case study. For the XAI community, our multifaceted investigation represents a path to practically address XAI risks while enriching our understanding of the ethical ramifications of incorporating XAI in decision-making processes. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. The perspective of artificial intelligence in administrative proceedings and administrative activity
- Author
-
Ilde Forgione
- Subjects
artificial intelligence, public administration, AI governance, administrative discretion, role of algorithms in proceedings, Law, Cybernetics, Q300-390
- Abstract
This paper analyzes the impact of using artificial intelligence systems in the exercise of public functions, with particular attention to the consequences of delegating complex tasks to such tools, including within discretionary proceedings. In this context, it reflects on how automated decision-making can affect transparency, due process, and human accountability, elements that are fundamental to guaranteeing the legitimacy of administrative action, as well as on the role to be assigned to AI. The contribution analyzes the relevant national and European legislation, as well as the case law, which reflect growing attention to transparency and accountability, aspects considered essential for the safe and effective adoption of AI in public contexts.
- Published
- 2024
- Full Text
- View/download PDF
9. Enhancing E-Government Services through State-of-the-Art, Modular, and Reproducible Architecture over Large Language Models.
- Author
-
Papageorgiou, George, Sarlis, Vangelis, Maragoudakis, Manolis, and Tjortjis, Christos
- Subjects
GENERATIVE artificial intelligence, LANGUAGE models, ARTIFICIAL intelligence, COMPUTATIONAL linguistics, ELECTRONIC data processing
- Abstract
Integrating Large Language Models (LLMs) into e-government applications has the potential to improve public service delivery through advanced data processing and automation. This paper explores critical aspects of a modular and reproducible architecture based on Retrieval-Augmented Generation (RAG) for deploying LLM-based assistants within e-government systems. By examining current practices and challenges, we propose a framework ensuring that Artificial Intelligence (AI) systems are modular and reproducible, essential for maintaining scalability, transparency, and ethical standards. Our approach, utilizing Haystack, demonstrates a complete multi-agent Generative AI (GAI) virtual assistant that facilitates scalability and reproducibility by allowing individual components to be independently scaled. This research focuses on a comprehensive review of the existing literature and presents case study examples to demonstrate how such an architecture can enhance public service operations. This framework provides a valuable case study for researchers, policymakers, and practitioners interested in exploring the integration of advanced computational linguistics and LLMs into e-government services, although it could benefit from further empirical validation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Trustworthiness of voting advice applications in Europe.
- Author
-
Stockinger, Elisabeth, Maas, Jonne, Talvitie, Christofer, and Dignum, Virginia
- Abstract
Voting Advice Applications (VAAs) are interactive tools used to assist in one’s choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens’ trust and participation in democratic structures. However, there is no established ground truth for one’s electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs according to the Ethics Guidelines for Trustworthy AI provided by the European Commission using publicly available information. We found scores to be comparable across VAAs and low in most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of the algorithm, and (iv) disclosure of the underlying values and assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. AI policymaking as drama
- Author
-
Alison Powell and Fenwick McKelvey
- Subjects
policy, drama, AI governance, Canada, United Kingdom, critical policy studies, Social sciences (General), H1-99
- Abstract
As two researchers faced with the prospect of still more knowledge mobilisation, and still more consultation, our manuscript critically reflects on strategies for engaging with consultations as critical questions in critical AI studies. Our intervention reflects on the often-ambivalent roles of researchers and ‘experts’ in the production, contestation, and transformation of consultations and the publicities therein concerning AI. Although ‘AI’ is increasingly becoming a marketing term, there are still substantive strategic efforts toward developing AI industries. These policy consultations do open opportunities for experts like the authors to contribute to public discourse and policy practice on AI. Regardless, in the process of negotiating and developing around these initiatives, a range of dominant publicities emerge, including inevitability and hype. We draw on our experiences contributing to AI policy-making processes in two Global North countries. Resurfacing long-standing critical questions about participation in policymaking, our manuscript reflects on the possibilities of critical scholarship faced with the uncertainty in the rhetoric of democracy and public engagement.
- Published
- 2024
- Full Text
- View/download PDF
12. Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing
- Author
-
Roba Alsaigh, Rashid Mehmood, Iyad Katib, Xiaohui Liang, Abdullah Alshanqiti, Juan M. Corchado, and Simon See
- Subjects
neuroinformatics, privacy, data sharing, ethical AI, AI governance, regulatory frameworks, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Published
- 2024
- Full Text
- View/download PDF
13. Why we need to be careful with LLMs in medicine
- Author
-
Jean-Christophe Bélisle-Pipon
- Subjects
artificial intelligence, AI ethics, LLM, medicine, AI regulation, AI governance, Medicine (General), R5-920
- Published
- 2024
- Full Text
- View/download PDF
14. AI metrics and policymaking: assumptions and challenges in the shaping of AI
- Author
-
Sioumalas-Christodoulou, Konstantinos and Tympas, Aristotle
- Published
- 2025
- Full Text
- View/download PDF
15. Gender Mainstreaming into African Artificial Intelligence Policies: Egypt, Rwanda and Mauritius as Case Studies
- Author
-
Ifeoma E Nwafor
- Subjects
gender mainstreaming, artificial intelligence, african artificial intelligence policies, ai governance, gender and ai, Law in general. Comparative and uniform law. Jurisprudence, K1-7720
- Abstract
Bias, particularly gender bias, is common in artificial intelligence (AI) systems, leading to harmful impacts that reinforce existing negative gender stereotypes and prejudices. Although gender mainstreaming is topical and fashionable in written discourse, it is yet to be thoroughly implemented in practice. While the clamour for AI regulation is commonplace globally, most government policies on the topic do not adequately account for gender inequities. In Africa, Egypt, Rwanda and Mauritius are at the forefront of AI policy development. By exploring these three countries as case studies, employing a feminist approach and using the African Union Strategy for Gender Equality & Women’s Empowerment for 2018–2028 as a methodological guide, this study undertakes a comparative analysis of the gender considerations in their policy approaches to AI. It was found that a disconnect exists between gender equality/responsiveness and the AI strategies of these countries, showing that gender has yet to be mainstreamed into these policies. The study provides key recommendations that offer an opportunity for African countries to be innovative leaders in AI governance by developing even more robust policies compared with Western AI policies that fail to adequately address gender.
- Published
- 2024
- Full Text
- View/download PDF
16. Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.
- Author
-
Buhmann, Alexander and Fieseler, Christian
- Subjects
ARTIFICIAL intelligence, DEEP learning, DELIBERATIVE democracy
- Abstract
Responsible innovation in artificial intelligence (AI) calls for public deliberation: well-informed "deep democratic" debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of actors from the AI industry within a deliberative system. We develop a new framework of responsibilities for AI innovation as well as a deliberative governance approach for enacting these responsibilities. In elucidating this approach, we show how actors from the AI industry can most effectively engage with experts and nonexperts in different social venues to facilitate well-informed judgments on opaque AI systems and thus effectuate their democratic governance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility.
- Author
-
Ridzuan, Nurhadhinah Nadiah, Masri, Masairol, Anshari, Muhammad, Fitriyani, Norma Latif, and Syafrudin, Muhammad
- Subjects
ARTIFICIAL intelligence, CRITICAL success factor, DATA privacy, BANKING industry, CREDIT analysis
- Abstract
This study examines the applications, benefits, challenges, and ethical considerations of artificial intelligence (AI) in the banking and finance sectors. It reviews current AI regulation and governance frameworks to provide insights for stakeholders navigating AI integration. A descriptive analysis based on a literature review of recent research is conducted, exploring AI applications, benefits, challenges, regulations, and relevant theories. This study identifies key trends and suggests future research directions. The major findings include an overview of AI applications, benefits, challenges, and ethical issues in the banking and finance industries. Recommendations are provided to address these challenges and ethical issues, along with examples of existing regulations and strategies for implementing AI governance frameworks within organizations. This paper highlights innovation, regulation, and ethical issues in relation to AI within the banking and finance sectors, analyzes the previous literature, and suggests strategies for AI governance framework implementation and future research directions. AI applications integrate with fintech in areas such as financial crime prevention, credit risk assessment, customer service, and investment management. These applications improve decision making and enhance the customer experience, particularly in banks. Existing AI regulations and guidelines include those from Hong Kong SAR, the United States, China, the United Kingdom, the European Union, and Singapore. Challenges include data privacy and security, bias and fairness, accountability and transparency, and the skill gap. Therefore, implementing an AI governance framework requires rules and guidelines to address these issues. This paper makes recommendations for policymakers and suggests practical implications with reference to the ASEAN guidelines for AI development at the national and regional levels.
As future research directions, a combination of the extended UTAUT, change theory, and institutional theory, together with critical success factors, can fill the theoretical gap through mixed-method research. The population gap can be addressed by research undertaken in a nation where fintech services are projected to be less accepted, such as a developing or Islamic country. In summary, this study presents a descriptive analysis offering four main contributions that make this research novel: (1) the applications of AI in the banking and finance industries, (2) the benefits and challenges of AI adoption in these industries, (3) the current AI regulations and governance, and (4) the types of theories relevant for further research. The research findings are expected to contribute to policy and offer practical implications for fintech development in a country. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering.
- Author
-
Lu, Qinghua, Zhu, Liming, Xu, Xiwei, Whittle, Jon, Zowghi, Didar, and Jacquet, Aurelie
- Subjects
MACHINE learning, ARTIFICIAL intelligence, DEEP reinforcement learning, REINFORCEMENT learning, FAILURE mode & effects analysis, CAPABILITY maturity model, SOFTWARE engineering
- Published
- 2024
- Full Text
- View/download PDF
19. Artificial intelligence governance: Ethical considerations and implications for social responsibility.
- Author
-
Camilleri, Mark Anthony
- Subjects
ARTIFICIAL intelligence, SOCIAL responsibility, SOCIAL impact, SOCIAL responsibility of business, EXPERT systems, CONSCIOUSNESS raising
- Abstract
A number of articles are increasingly raising awareness of the different uses of artificial intelligence (AI) technologies for customers and businesses. Many authors discuss their benefits and possible challenges. However, for the time being, there is still limited research focused on AI principles and regulatory guidelines for the developers of expert systems like machine learning (ML) and/or deep learning (DL) technologies. This research addresses this knowledge gap in the academic literature. The objectives of this contribution are threefold: (i) it describes AI governance frameworks that were put forward by technology conglomerates, policy makers and intergovernmental organizations; (ii) it sheds light on the extant literature on 'AI governance' as well as on the intersection of 'AI' and 'corporate social responsibility' (CSR); (iii) it identifies key dimensions of AI governance, and elaborates on the promotion of accountability and transparency; explainability, interpretability and reproducibility; fairness and inclusiveness; privacy and safety of end users; as well as the prevention of risks and of cyber security issues from AI systems. This research implies that all those who are involved in the research, development and maintenance of AI systems have social and ethical responsibilities to bear toward their consumers as well as other stakeholders in society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Gender Mainstreaming into African Artificial Intelligence Policies: Egypt, Rwanda and Mauritius as Case Studies.
- Author
-
Nwafor, Ifeoma E.
- Subjects
ARTIFICIAL intelligence, GENDER stereotypes, PREJUDICES, GOVERNMENT policy, FEMINISM
- Abstract
Bias, particularly gender bias, is common in artificial intelligence (AI) systems, leading to harmful impacts that reinforce existing negative gender stereotypes and prejudices. Although gender mainstreaming is topical and fashionable in written discourse, it is yet to be thoroughly implemented in practice. While the clamour for AI regulation is commonplace globally, most government policies on the topic do not adequately account for gender inequities. In Africa, Egypt, Rwanda and Mauritius are at the forefront of AI policy development. By exploring these three countries as case studies, employing a feminist approach and using the African Union Strategy for Gender Equality & Women’s Empowerment for 2018–2028 as a methodological guide, this study undertakes a comparative analysis of the gender considerations in their policy approaches to AI. It found that a disconnect exists between gender equality/responsiveness and the AI strategies of these countries, showing that gender has yet to be mainstreamed into these policies. The study provides key recommendations that offer an opportunity for African countries to be innovative leaders in AI governance by developing even more robust policies compared with Western AI policies that fail to adequately address gender. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Public procurement of artificial intelligence systems: new risks and future proofing.
- Author
-
Hickok, Merve
- Subjects
ARTIFICIAL intelligence, GOVERNMENT purchasing, DUE process of law, PUBLIC sector, DUE diligence
- Abstract
Public entities around the world are increasingly deploying artificial intelligence (AI) and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to the private sector's: increase the efficiency and speed of transactions and lower the costs. However, public entities are first and foremost established to meet the needs of the members of society and protect the safety, fundamental rights, and wellbeing of those they serve. Currently, AI systems are deployed by the public sector at various administrative levels without robust due diligence, monitoring, or transparency. This paper critically maps out the challenges in procurement of AI systems by public entities and the long-term implications necessitating AI-specific procurement guidelines and processes. This dual-prong exploration includes the new complexities and risks introduced by AI systems, and the institutional capabilities impacting the decision-making process. AI-specific public procurement guidelines are urgently needed to protect fundamental rights and due process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. AI Modelling of Counterfactual Thinking for Judicial Reasoning and Governance of Law
- Author
-
Pereira, Luís Moniz, Santos, Francisco C., and Lopes, António Barata; Series Editors: Casanovas, Pompeu, and Sartor, Giovanni; Editors: Sousa Antunes, Henrique, Freitas, Pedro Miguel, Oliveira, Arlindo L., Martins Pereira, Clara, Vaz de Sequeira, Elsa, and Barreto Xavier, Luís
- Published
- 2024
- Full Text
- View/download PDF
23. Management Processes
- Author
-
Hirsch, Dennis, Bartley, Timothy, Chandrasekaran, Aravind, Norris, Davon, Parthasarathy, Srinivasan, and Turner, Piers Norris
- Published
- 2024
- Full Text
- View/download PDF
24. Ethics dumping in artificial intelligence
- Author
-
Jean-Christophe Bélisle-Pipon and Gavin Victor
- Subjects
artificial intelligence, AI ethics, ethics dumping, ethical guidelines, accountability, AI governance, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Artificial Intelligence (AI) systems encode not just statistical models and complex algorithms designed to process and analyze data, but also significant normative baggage. This ethical dimension, derived from the underlying code and training data, shapes the recommendations given, behaviors exhibited, and perceptions had by AI. These factors influence how AI is regulated, used, misused, and impacts end-users. The multifaceted nature of AI’s influence has sparked extensive discussions across disciplines like Science and Technology Studies (STS), Ethical, Legal and Social Implications (ELSI) studies, public policy analysis, and responsible innovation—underscoring the need to examine AI’s ethical ramifications. While the initial wave of AI ethics focused on articulating principles and guidelines, recent scholarship increasingly emphasizes the practical implementation of ethical principles, regulatory oversight, and mitigating unforeseen negative consequences. Drawing from the concept of “ethics dumping” in research ethics, this paper argues that practices surrounding AI development and deployment can, unduly and in a very concerning way, offload ethical responsibilities from developers and regulators to ill-equipped users and host environments. Four key trends illustrating such ethics dumping are identified: (1) AI developers embedding ethics through coded value assumptions, (2) AI ethics guidelines promoting broad or unactionable principles disconnected from local contexts, (3) institutions implementing AI systems without evaluating ethical implications, and (4) decision-makers enacting ethical governance frameworks disconnected from practice. Mitigating AI ethics dumping requires empowering users, fostering stakeholder engagement in norm-setting, harmonizing ethical guidelines while allowing flexibility for local variation, and establishing clear accountability mechanisms across the AI ecosystem.
- Published
- 2024
- Full Text
- View/download PDF
25. Contesting the public interest in AI governance
- Author
-
Tegan Cohen and Nicolas P. Suzor
- Subjects
Artificial intelligence, AI governance, Democracy, Public contestability, Public interest, Cybernetics, Q300-390, Information theory, Q350-390
- Abstract
This article argues that public contestability is a critical attribute of governance arrangements designed to align AI deployment with the public interest. Mechanisms to collectively contest decisions which do not track public interests are an important guardrail against erroneous, exclusionary, and arbitrary decision-making. On that basis, we suggest that efforts to align AI to the public interest through democratic participation will benefit substantially from strengthening capabilities for public contestation outside aggregative and deliberative processes. We draw on insights from democratic and regulatory theory to explore three underlying requirements for public contestability in AI governance: (1) capabilities to organise; (2) separation of powers; and (3) access to alternative and independent information. While recognising that suitable mechanisms for contestability will vary by system and context, we sketch out some possibilities for embedding public contestability in AI governance frameworks with a view to provoking further discussion on institutional design.
- Published
- 2024
- Full Text
- View/download PDF
26. THE IMPLICATIONS OF THE EU AI ACT ON CONVERSATIONAL TECHNOLOGIES LIKE CHATGPT
- Author
-
Emilian MATEICIUC
- Subjects
eu ai act, conversational ai, chatgpt, ai ethics, ai governance, Social sciences (General), H1-99
- Abstract
This paper investigates the implications of the European Union's Artificial Intelligence Act (AI Act), on conversational AI technologies. As the EU institutes a groundbreaking framework for AI regulation, this study assesses how the AI Act's risk-oriented approach impacts the crafting, deployment, and oversight of conversational AI. The analysis explores the Act's system classification, high-risk AI categorization, and delineation of duties for AI developers and deployers, examining effects on innovation, privacy, and ethical considerations within conversational AI. The significance of this research lies in its exploration of the EU AI Act's effort to balance technological progression with the safeguarding of fundamental rights and user privacy. By examining the AI Act provisions specific to conversational AI technologies like ChatGPT, this paper highlights the challenges and opportunities within the legislative framework. It addresses key regulatory concerns including data protection, algorithmic transparency, and accountability, evaluating the Act's role as a potential standard for AI legislation globally. Situated within the extensive debate on AI regulation and ethics, this contribution is timely, offering insights into how legislative bodies can adapt to and influence the rapid development of AI technologies. This analysis seeks to guide policymakers, developers, and the academic sphere in navigating the complexities of conversational AI regulation, proposing strategies to align AI technology's growth with societal values and legal frameworks.
- Published
- 2024
27. Towards trustworthy medical AI ecosystems – a proposal for supporting responsible innovation practices in AI-based medical innovation
- Author
-
Herzog, Christian, Blank, Sabrina, and Stahl, Bernd Carsten
- Published
- 2024
- Full Text
- View/download PDF
28. International governance of advancing artificial intelligence
- Author
-
Emery-Xu, Nicholas, Jordan, Richard, and Trager, Robert
- Published
- 2024
- Full Text
- View/download PDF
29. An AI ethics 'David and Goliath': value conflicts between large tech companies and their employees.
- Author
-
Ryan, Mark, Christodoulou, Eleni, Antoniou, Josephina, and Iordanou, Kalypso
- Subjects
-
HIGH technology industries ,VALUES (Ethics) ,BUSINESS ethics ,ARTIFICIAL intelligence ,NETWORK governance ,ETHICS - Abstract
Artificial intelligence ethics requires a united approach from policymakers, AI companies, and individuals, in the development, deployment, and use of these technologies. However, sometimes discussions can become fragmented because of the different levels of governance (Schmitt in AI Ethics 1–12, 2021) or because of different values, stakeholders, and actors involved (Ryan and Stahl in J Inf Commun Ethics Soc 19:61–86, 2021). Recently, these conflicts became highly visible, with examples such as the dismissal of AI ethics researcher Dr. Timnit Gebru from Google and the resignation of whistle-blower Frances Haugen from Facebook. Underpinning each debacle was a conflict between the organisation's economic and business interests and the morals of its employees. This paper examines tensions between the ethics of AI organisations and the values of their employees, by providing an exploration of the AI ethics literature in this area, and a qualitative analysis of three workshops with AI developers and practitioners. Common ethical and social tensions (such as power asymmetries, mistrust, societal risks, harms, and lack of transparency) are discussed, along with proposals on how to avoid or reduce these conflicts in practice (e.g., building trust, fair allocation of responsibility, protecting employees' autonomy, and encouraging ethical training and practice). Altogether, we suggest the following steps to help reduce ethical issues within AI organisations: improved and diverse ethics education and training within businesses; internal and external ethics auditing; the establishment of AI ethics ombudsmen, AI ethics review committees and an AI ethics watchdog; as well as access to trustworthy AI ethics whistle-blower organisations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. More Capable, Less Benevolent: Trust Perceptions of AI Systems across Societal Contexts.
- Author
-
Novozhilova, Ekaterina, Mays, Kate, Paik, Sejin, and Katz, James E.
- Subjects
TRUST ,ARTIFICIAL intelligence ,GENERATIVE artificial intelligence ,PUBLIC opinion ,RISK perception ,BENEVOLENCE - Abstract
Modern AI applications have caused broad societal implications across key public domains. While previous research primarily focuses on individual user perspectives regarding AI systems, this study expands our understanding to encompass general public perceptions. Through a survey (N = 1506), we examined public trust across various tasks within education, healthcare, and creative arts domains. The results show that participants vary in their trust across domains. Notably, AI systems' abilities were evaluated higher than their benevolence across all domains. Demographic traits had less influence on trust in AI abilities and benevolence compared to technology-related factors. Specifically, participants with greater technological competence, AI familiarity, and knowledge viewed AI as more capable in all domains. These participants also perceived greater systems' benevolence in healthcare and creative arts but not in education. We discuss the importance of considering public trust and its determinants in AI adoption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. General-purpose AI regulation and the European Union AI Act
- Author
-
Oskar J. Gstrein, Noman Haleem, and Andrej Zwitter
- Subjects
Artificial intelligence ,General-purpose AI ,AI Act ,European Union ,AI governance ,Cybernetics ,Q300-390 ,Information theory ,Q350-390 - Abstract
This article provides an initial analysis of the EU AI Act's (AIA) approach to regulating general-purpose artificial intelligence (AI) – such as OpenAI's ChatGPT – and argues that it marks a significant shift from reactive to proactive AI governance. While this may alleviate concerns that regulators are constantly lagging behind technological developments, complex questions remain about the enforceability, democratic legitimacy, and future-proofing of the AIA. We present an interdisciplinary analysis of the relevant technological and legislative developments that ultimately led to the hybrid regulation that the AIA has become: a framework largely focused on product safety and standardisation with some elements related to the protection of fundamental rights. We analyse and discuss the legal requirements and obligations for the development and use of general-purpose AI and present the envisaged enforcement and penalty structure for the (un)lawful use of general-purpose AI in the EU. In conclusion, we argue that the AIA has significant potential to become a global benchmark for governance and regulation in this area of strategic global importance. However, its success hinges on effective enforcement, fruitful intra-European and international cooperation, and the EU's ability to adapt to the rapidly evolving AI landscape.
- Published
- 2024
- Full Text
- View/download PDF
32. Governing AI in Southeast Asia: ASEAN’s way forward
- Author
-
Bama Andika Putra
- Subjects
ASEAN ,Southeast Asia ,artificial intelligence ,AI governance ,governance ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Despite the rapid development of AI, ASEAN has not been able to devise a regional governance framework to address relevant existing and future challenges. This is concerning, considering the potential of AI to accelerate GDP growth among ASEAN member states in the coming years. This qualitative inquiry discusses AI governance in Southeast Asia over the past 5 years and what regulatory policies ASEAN can explore to better regulate its use among its member states. It considers the unique political landscape of the region, defined by the adoption of distinctive norms such as non-interference and a preference for dialogue, commonly termed the ASEAN Way. The following measures are concluded as potential regional governance frameworks: (1) elevation of the topic’s importance in ASEAN’s intra- and inter-regional forums to formulate collective regional agreements on AI, (2) adoption of AI governance measures in the field of education, specifically reskilling and upskilling strategies to respond to future transformations of the working landscape, and (3) establishment of an ASEAN working group to bridge knowledge gaps among member states caused by the disparity of AI readiness in the region.
- Published
- 2024
- Full Text
- View/download PDF
33. THE IMPLICATIONS OF THE EU AI ACT ON CONVERSATIONAL TECHNOLOGIES LIKE CHATGPT.
- Author
-
MATEICIUC, Emilian
- Subjects
ARTIFICIAL intelligence ,DEPLOYMENT (Military strategy) ,DATA privacy ,CHATGPT - Abstract
This paper investigates the implications of the European Union's Artificial Intelligence Act (AI Act) on conversational AI technologies. As the EU institutes a groundbreaking framework for AI regulation, this study assesses how the AI Act's risk-oriented approach impacts the crafting, deployment, and oversight of conversational AI. The analysis explores the Act's system classification, high-risk AI categorization, and delineation of duties for AI developers and deployers, examining effects on innovation, privacy, and ethical considerations within conversational AI. The significance of this research lies in its exploration of the EU AI Act's effort to balance technological progression with the safeguarding of fundamental rights and user privacy. By examining the AI Act provisions specific to conversational AI technologies like ChatGPT, this paper highlights the challenges and opportunities within the legislative framework. It addresses key regulatory concerns including data protection, algorithmic transparency, and accountability, evaluating the Act's role as a potential standard for AI legislation globally. Situated within the extensive debate on AI regulation and ethics, this contribution is timely, offering insights into how legislative bodies can adapt to and influence the rapid development of AI technologies. This analysis seeks to guide policymakers, developers, and the academic sphere in navigating the complexities of conversational AI regulation, proposing strategies to align AI technology's growth with societal values and legal frameworks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
34. More Capable, Less Benevolent: Trust Perceptions of AI Systems across Societal Contexts
- Author
-
Ekaterina Novozhilova, Kate Mays, Sejin Paik, and James E. Katz
- Subjects
artificial intelligence ,trust ,survey ,generative AI ,AI ethics ,AI governance ,Computer engineering. Computer hardware ,TK7885-7895 - Abstract
Modern AI applications have caused broad societal implications across key public domains. While previous research primarily focuses on individual user perspectives regarding AI systems, this study expands our understanding to encompass general public perceptions. Through a survey (N = 1506), we examined public trust across various tasks within education, healthcare, and creative arts domains. The results show that participants vary in their trust across domains. Notably, AI systems’ abilities were evaluated higher than their benevolence across all domains. Demographic traits had less influence on trust in AI abilities and benevolence compared to technology-related factors. Specifically, participants with greater technological competence, AI familiarity, and knowledge viewed AI as more capable in all domains. These participants also perceived greater systems’ benevolence in healthcare and creative arts but not in education. We discuss the importance of considering public trust and its determinants in AI adoption.
- Published
- 2024
- Full Text
- View/download PDF
35. Designing an ML Auditing Criteria Catalog as Starting Point for the Development of a Framework
- Author
-
Markus Schwarz, Ludwig Christian Hinske, Ulrich Mansmann, and Fady Albashiti
- Subjects
AAI ,AI auditing ,auditable AI ,AI governance ,ML auditing core criteria catalog ,AI auditing framework ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Although AI algorithms and applications are becoming more and more popular in the healthcare sector, only a few institutions have an operational AI strategy. Identifying the processes best suited for ML algorithm implementation and adoption is a big challenge. Also, raising human confidence in AI systems is fundamental to building trustworthy, socially beneficial and responsible AI. A commonly agreed AI auditing framework that provides best practices and tools could help speed up the adoption process. In this paper, we first highlight important concepts in the field of AI auditing and then restructure and subsume them into an ML auditing core criteria catalog. We conducted a scoping study in which we analyzed sources associated with the term “Auditable AI” in a qualitative way. We utilized best practices from Mayring (2000), Miles and Huberman (1994), and Bortz and Döring (2006). Based on referrals, additional relevant white papers and sources in the field of AI auditing were also included. The literature base was compared using inductively constructed categories. Afterwards, the findings were reflected on and synthesized into a resulting ML auditing core criteria catalog. The catalog is grouped into the categories Conceptual Basics, Data & Algorithm Design, and Assessment Metrics. As a practical guide, it consists of 30 questions developed to cover the mentioned categories and to guide ML implementation teams. Our consensus-based ML auditing criteria catalog is intended as a starting point for the development of evaluation strategies by specific stakeholders. We believe it will be beneficial to healthcare organizations that have been or will start implementing ML algorithms: not only to help them prepare for any upcoming legally required audit activities, but also to create better, well-perceived and accepted products.
Potential limitations could be overcome by applying the proposed catalog to real use cases in practice, exposing gaps and enabling further improvement. Thus, this paper is seen as a starting point towards the development of a framework in which essential technical components can be specified.
- Published
- 2024
- Full Text
- View/download PDF
36. Navigating and Addressing Public Concerns in AI: Insights From Social Media Analytics and Delphi
- Author
-
Mehrdad Maghsoudi, Amirmahdi Mohammadi, and Sajjad Habibipour
- Subjects
Artificial intelligence ,AI concerns ,social media analysis ,AI governance ,responsible AI ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The rapid advancement and integration of artificial intelligence (AI) in various domains of society have given rise to a complex landscape of public concerns. This research endeavors to systematically explore these concerns by employing a multi-stage methodology that combines large-scale social media data collection from Twitter and advanced text analytics. The study identifies seven distinct clusters of concerns, encompassing privacy and security, workforce displacement, existential risks, social and ethical implications, dependency on AI, misuse of AI, and lack of transparency. To further contextualize these findings, the Delphi method was employed to gather insights from AI ethics experts, providing a deeper understanding of the public’s apprehensions. The results underscore the critical need for addressing these concerns to foster public trust and acceptance of AI technologies. This comprehensive analysis offers valuable guidance for policymakers, AI developers, and stakeholders to navigate and mitigate the multifaceted issues associated with AI, ultimately contributing to more informed and responsible AI deployment. By addressing these public concerns, the study aims to pave the way for a more ethically sound and socially acceptable integration of AI into society, ensuring that the benefits of AI can be realized while minimizing potential risks and negative impacts. Through this systematic approach, the research highlights the importance of continuous monitoring and proactive management of AI-related concerns to sustain public confidence and promote beneficial AI innovation.
- Published
- 2024
- Full Text
- View/download PDF
37. A Formal Model for Integrating Consent Management Into MLOps
- Author
-
Neda Peyrone and Duangdao Wichadakul
- Subjects
GDPR ,AI act ,consent management ,event-B ,AI governance ,MLOps ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In the artificial intelligence (AI) era, data has become increasingly essential for learning and analysis. AI enables automated decision-making that may lead to violations of the General Data Protection Regulation (GDPR). The GDPR is the data protection law of the European Union (EU) that allows individuals (‘data subjects’) to control their personal data. According to the law, automated decision-making can be permitted where data subjects give explicit consent. Therefore, consent management (CM) has become an essential software component for managing data subjects’ data lifecycle and their consent. Bringing machine learning (ML) into production requires machine learning operations (MLOps). MLOps is a set of processes for delivering ML artifacts reliably and efficiently. However, current MLOps frameworks neglect the integration of CM into their processes, leading to the risk of GDPR violations. This research proposes a formal model for integrating CM into MLOps that adopts privacy by design (PbD) from the outset. Finally, we provided a mapping from the formal model to a class diagram as a guideline for integrating CM into MLOps and demonstrated how to apply the proposed class diagram to existing ML developments, such as machine unlearning, in conjunction with the Purchase dataset.
- Published
- 2024
- Full Text
- View/download PDF
38. Enhancing E-Government Services through State-of-the-Art, Modular, and Reproducible Architecture over Large Language Models
- Author
-
George Papageorgiou, Vangelis Sarlis, Manolis Maragoudakis, and Christos Tjortjis
- Subjects
AI Governance ,e-government ,generative artificial intelligence (GAI) ,modularity ,large language models (LLMs) ,reproducibility ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Integrating Large Language Models (LLMs) into e-government applications has the potential to improve public service delivery through advanced data processing and automation. This paper explores critical aspects of a modular and reproducible architecture based on Retrieval-Augmented Generation (RAG) for deploying LLM-based assistants within e-government systems. By examining current practices and challenges, we propose a framework ensuring that Artificial Intelligence (AI) systems are modular and reproducible, essential for maintaining scalability, transparency, and ethical standards. Our approach, utilizing Haystack, demonstrates a complete multi-agent Generative AI (GAI) virtual assistant that facilitates scalability and reproducibility by allowing individual components to be scaled independently. This research focuses on a comprehensive review of the existing literature and presents case study examples to demonstrate how such an architecture can enhance public service operations. This framework provides a valuable case study for researchers, policymakers, and practitioners interested in exploring the integration of advanced computational linguistics and LLMs into e-government services, although it could benefit from further empirical validation.
- Published
- 2024
- Full Text
- View/download PDF
39. An Integrative Theoretical Framework for Responsible Artificial Intelligence.
- Author
-
Haidar, Ahmad
- Abstract
The rapid integration of Artificial Intelligence (AI) into various sectors has yielded significant benefits, such as enhanced business efficiency and customer satisfaction, while posing challenges, including privacy concerns, algorithmic bias, and threats to autonomy. In response to these multifaceted issues, this study proposes a novel integrative theoretical framework for Responsible AI (RAI), which addresses four key dimensions: technical, sustainable development, responsible innovation management, and legislation. The responsible innovation management and the legal dimensions form the foundational layers of the framework. The first embeds elements like anticipation and reflexivity into corporate culture, and the latter examines AI-specific laws from the European Union and the United States, providing a comparative perspective on legal frameworks governing AI. The study's findings may be helpful for businesses seeking to responsibly integrate AI, developers who focus on creating responsibly compliant AI, and policymakers looking to foster awareness and develop guidelines for RAI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Doing versus saying: responsible AI among large firms
- Author
-
Bughin, Jacques
- Published
- 2024
- Full Text
- View/download PDF
41. Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance.
- Author
-
Wei, Wenqi and Liu, Ling
- Published
- 2025
- Full Text
- View/download PDF
42. Exploring AI governance in the Middle East and North Africa (MENA) region: gaps, efforts, and initiatives
- Author
-
Hana Trigui, Fatma Guerfali, Emna Harigua-Souiai, Radwan Qasrawi, Chiraz Atri, Elie Salem Sokhn, Christo El Morr, Karima Hammami, Oussama Souiai, Jianhong Wu, Jude Dzevela Kong, Jean Jacques Rousseau, and Sadri Znaidi
- Subjects
Artificial Intelligence (AI) ,AI readiness ,AI governance ,Regulatory framework ,MENA Region ,Information technology ,T58.5-58.64 ,Political institutions and public administration (General) ,JF20-2112 - Abstract
This commentary explores MENA’s AI governance, addressing gaps, showcasing successful strategies, and comparing national approaches. It emphasizes current deficiencies, highlights regional contributions to global AI governance, and offers insights into effective frameworks. The study reveals distinctions and trends in MENA’s national AI strategies, serving as a concise resource for policymakers and industry stakeholders.
- Published
- 2024
- Full Text
- View/download PDF
43. A feminist framework for urban AI governance: addressing challenges for public–private partnerships
- Author
-
Laine McCrory
- Subjects
AI governance ,feminist technology ,sidewalk labs ,smart cities ,Information technology ,T58.5-58.64 ,Political institutions and public administration (General) ,JF20-2112 - Abstract
This analysis provides a critical account of AI governance in the modern “smart city” through a feminist lens. Evaluating the case of Sidewalk Labs’ Quayside project—a smart city development that was to be implemented in Toronto, Canada—it is argued that public–private partnerships can create harmful impacts when corporate actors seek to establish new “rules of the game” regarding data regulation. While the Quayside project was eventually abandoned in 2020, it offers key lessons for urban algorithmic governance both within Canada and internationally. Articulating the need for a revitalised and participatory smart city governance programme, the analysis prioritizes meaningful engagement in the form of transparency and accountability measures. Taking a feminist lens, it argues for a two-pronged approach to governance: integrating collective engagement from the outset of the design process and ensuring civilian data protection through a robust yet localized rights-based privacy regulation strategy. Engaging with feminist theories of intersectionality in relation to technology and data collection, this framework articulates the need to understand broader histories of social marginalization when implementing governance strategies for artificial intelligence in cities.
- Published
- 2024
- Full Text
- View/download PDF
44. Identifying stakeholder motivations in normative AI governance: a systematic literature review for research guidance
- Author
-
Frederic Heymans and Rob Heyman
- Subjects
AI governance ,social constructivism ,social construction of technology ,stakeholder motivations ,stakeholders ,Information technology ,T58.5-58.64 ,Political institutions and public administration (General) ,JF20-2112 - Abstract
Ethical guidelines and policy documents intended to guide AI innovations have been heralded as the solution to guard against harmful effects and to increase public value. However, these guidelines and policy documents face persistent challenges. While they are often criticized for their abstraction and disconnection from real-world contexts, stakeholders may also influence them for political or strategic reasons. While this last issue is frequently acknowledged, a means or method to explore it is seldom provided. To address this gap, the paper employs a combination of social constructivism and science & technology studies perspectives, along with desk research, to investigate whether prior research has examined the influence of stakeholder interests, strategies, or agendas on guidelines and policy documents. The study contributes to the discourse on AI governance by proposing a theoretical framework and methodologies to better analyze this underexplored area, aiming to enhance comprehension of the policymaking process within the rapidly evolving AI landscape. The findings underscore the need for a critical evaluation of the methodologies found and further exploration of their utility. In addition, the results aim to stimulate ongoing critical debate on this subject.
- Published
- 2024
- Full Text
- View/download PDF
45. A systematic review of artificial intelligence impact assessments.
- Author
-
Stahl, Bernd Carsten, Antoniou, Josephina, Bhalla, Nitika, Brooks, Laurence, Jansen, Philip, Lindqvist, Blerta, Kirichenko, Alexey, Marchal, Samuel, Rodrigues, Rowena, Santiago, Nicole, Warso, Zuzanna, and Wright, David
- Abstract
Artificial intelligence (AI) is producing highly beneficial impacts in many domains, from transport to healthcare, from energy distribution to marketing, but it also raises concerns about undesirable ethical and social consequences. AI impact assessments (AI-IAs) are a way of identifying positive and negative impacts early on to safeguard AI's benefits and avoid its downsides. This article describes the first systematic review of these AI-IAs. Working with a population of 181 documents, the authors identified 38 actual AI-IAs and subjected them to a rigorous qualitative analysis with regard to their purpose, scope, organisational context, expected issues, timeframe, process and methods, transparency and challenges. The review demonstrates some convergence between AI-IAs. It also shows that the field is not yet at the point of full agreement on content, structure and implementation. The article suggests that AI-IAs are best understood as means to stimulate reflection and discussion concerning the social and ethical consequences of AI ecosystems. Based on the analysis of existing AI-IAs, the authors describe a baseline process of implementing AI-IAs that can be implemented by AI developers and vendors and that can be used as a critical yardstick by regulators and external observers to evaluate organisations' approaches to AI. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. Ensuring the ethical development and use of AI in local governance.
- Author
-
Brand, Dirk J.
- Subjects
MORAL development ,ARTIFICIAL intelligence ,POLITICAL participation ,DIGITAL technology ,MUNICIPAL services ,CITIES & towns - Abstract
The rapid development and use of digital technology, including artificial intelligence (AI), have a significant impact on society in general, but specifically also on the public sector. In this data-driven digital era, the use of AI in government has become a crucial tool to help shape the future of public governance. Automated or algorithmic decision-making enables more informed and evidence-based decision-making in government. The variety of applications of AI is growing rapidly and spans all functional areas of government. AI-based technologies create opportunities to transform the way in which local governments deliver municipal services. In other spheres of government, too, AI enhances efficiency and enables proactive and responsive decision-making to the benefit of citizens. It has huge potential and offers many benefits, but it also carries many risks. This requires a responsible approach that includes mechanisms for safeguarding fundamental rights and inclusivity, a process of constant participation with citizens, and assurance that the performance of AI systems is proportionate, supervised, and reasoned. Responsible AI, based on a set of key principles such as accountability, explainability, and respect for privacy and other human rights, should be the foundation for the use of AI in public governance. Various policy and legislative initiatives around the world aim to create clear, workable frameworks for the development and use of responsible AI. An example is the AI Act in the European Union, approved by the European Parliament in June 2023, which follows a risk-based approach that acknowledges the protection of human rights in the development and use of AI. Various local and regional policy and legal developments to support responsible AI at the sub-national level are also in process. How can AI be used to improve public governance? What safeguards should be put in place to ensure ethical and responsible AI and the protection of human rights? How can AI be used to co-create public services? In presenting the paper, these critical questions are considered. This paper explores the development of responsible and ethical AI policies and initiatives in public governance, with a specific focus on AI in cities. It discusses key initiatives such as the establishment of algorithmic registers in Amsterdam and Barcelona, as well as specific AI applications that enhance service delivery. It concludes with some recommendations for building blocks to ensure responsible AI in public governance at the local government level. [ABSTRACT FROM AUTHOR]
- Published
- 2023
47. Deconstructing public participation in the governance of facial recognition technologies in Canada
- Author
-
Jones, Maurice and McKelvey, Fenwick
- Published
- 2024
- Full Text
- View/download PDF
48. Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI
- Author
-
Sheir, Stephanie, Manzini, Arianna, Smith, Helen, and Ives, Jonathan
- Published
- 2024
- Full Text
- View/download PDF
49. Editorial: Artificial intelligence (AI) ethics in business
- Author
-
Alejo Sison, Ignacio Ferrero, Pablo García Ruiz, and Tae Wan Kim
- Subjects
artificial intelligence ,AI governance ,artificial intelligence ethics ,artificial intelligence in business ,algorithmic decision-making ,Psychology ,BF1-990 - Published
- 2023
- Full Text
- View/download PDF
50. Responsible AI in Africa
- Author
-
Eke, Damian Okaibedi, Wakunuma, Kutoma, and Akintoye, Simisola
- Subjects
Artificial Intelligence ,AI Ecosystems in Africa ,AI ethics ,digital culture ,SDGs ,Sustainable Development Goals ,AI policy ,ICT infrastructure in Africa ,AI governance ,Sociology ,Literature: history and criticism ,Artificial intelligence ,Human geography ,Ethics and moral philosophy - Abstract
This open access book contributes to the discourse on Responsible Artificial Intelligence (AI) from an African perspective. It is a unique collection that brings together prominent AI scholars to discuss AI ethics from theoretical and practical African perspectives, and it makes a case for African values, interests, expectations and principles to underpin the design, development and deployment (DDD) of AI in Africa. The book is a first in that it pays attention to socio-cultural contexts of Responsible AI in a way that is sensitive to African cultures and societies. It makes an important contribution to the global AI ethics discourse, which often neglects AI narratives from Africa despite growing evidence of DDD in many domains. Nine original contributions provide useful insights to advance the understanding and implementation of Responsible AI in Africa, including discussions of the epistemic injustice of global AI ethics, opportunities and challenges, an examination of AI co-bots and chatbots in an African workspace, gender and AI, a consideration of African philosophies such as Ubuntu in the application of AI, African AI policy, and a look towards a future of Responsible AI in Africa.
- Published
- 2023
- Full Text
- View/download PDF