389 results for "AI governance"
Search Results
2. How could the United Nations Global Digital Compact prevent cultural imposition and hermeneutical injustice?
- Author
- Gwagwa, Arthur and Mollema, Warmhold Jan Thomas
- Published
- 2024
- Full Text
- View/download PDF
3. Assessing European Union Member States’ Implementation of the Artificial Intelligence Act
- Author
- Costa, Helena, Mendonça, Joana, Zimmermann, Ricardo, editor, Rodrigues, José Coelho, editor, Simoes, Ana, editor, and Dalmarco, Gustavo, editor
- Published
- 2025
- Full Text
- View/download PDF
4. The Dynamics of AI Innovation Ecosystems: A Case Study of Greater Manchester
- Author
- Jin, Na, Miles, Ian, Zimmermann, Ricardo, editor, Rodrigues, José Coelho, editor, Simoes, Ana, editor, and Dalmarco, Gustavo, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Safeguarding Knowledge: Ethical Artificial Intelligence Governance in the University Digital Transformation
- Author
- Molina-Carmona, Rafael, García-Peñalvo, Francisco José, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Vendrell Vidal, Eduardo, editor, Cukierman, Uriel R., editor, and Auer, Michael E., editor
- Published
- 2025
- Full Text
- View/download PDF
6. 5 Steps for Enterprise Artificial Intelligence Governance and Compliance
- Author
- Peng, Wenhua, Yu, Bingbing, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Zhang, Yong, editor, Cai, Ting, editor, and Zhang, Liang-Jie, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Generative AI and WMD Nonproliferation: A Practical Primer for Policymakers and Diplomats.
- Author
- Bajema, Natasha E.
- Subjects
ARTIFICIAL neural networks, GENERATIVE artificial intelligence, ARTIFICIAL intelligence, LANGUAGE models, TECHNOLOGICAL risk assessment
- Abstract
This primer provides a comprehensive overview of generative artificial intelligence (AI) and its implications for weapons of mass destruction (WMD) nonproliferation. It addresses five key areas, beginning with fundamental AI concepts that explain the evolution from traditional AI to current generative AI systems. The primer distinguishes between predictive and generative AI models, emphasizing how the newest AI models, particularly large language models (LLMs), differ from previous technologies in their ability to generate novel content rather than simply making predictions. The document provides a detailed analysis of various generative AI architectures, including LLMs, diffusion models, and emerging world models. It outlines different training techniques (supervised, unsupervised, reinforcement learning) and explains how these systems are developed and improved through methods like fine-tuning and retrieval augmented generation (RAG). The primer then explores current applications of generative AI, from basic chatbot interactions to sophisticated agentic AI systems capable of autonomous action. It details how organizations can customize AI models for specific domains and discusses the emergence of AI-enhanced search engines and workflow automation. Several critical challenges are identified, including design flaws (hallucinations, data biases, and copyright issues), implementation risks (data privacy, disinformation, and malicious use potential), and growth limitations (data shortages, energy constraints, and uncertain economic returns). The primer pays particular attention to specific WMD-related concerns, such as the potential for misuse in weapons development and proliferation. The document also outlines current and emerging regulatory approaches, including U.S. initiatives like Executive Order 14110, the European Union's AI Act, global governance efforts through international organizations, and specific measures for addressing WMD-related risks. The primer concludes with practical guidance for policymakers and diplomats, including detailed instructions for using AI tools and frameworks for evaluating their potential benefits and risks in the nonproliferation domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
8. Editorial: Protecting privacy in neuroimaging analysis: balancing data sharing and privacy preservation.
- Author
- Mehmood, Rashid, Lazar, Mariana, Liang, Xiaohui, Corchado, Juan M., and See, Simon
- Subjects
ARTIFICIAL intelligence, FEDERATED learning, DATA privacy, MAGNETIC resonance imaging, TECHNOLOGICAL innovations, MEDICAL informatics, SCALABILITY
- Abstract
The editorial in Frontiers in Neuroinformatics discusses the importance of protecting privacy in neuroimaging analysis while balancing data sharing and privacy preservation. It highlights the challenges of handling sensitive data generated by techniques like MRI and the need for innovative methodologies to safeguard individual privacy. The research topic explores interdisciplinary solutions, such as federated learning and differential privacy, to advance the field ethically and legally. The contributions in the editorial showcase how AI-driven methodologies can enhance efficiency, maintain privacy, and promote transparency in neuroimaging research, emphasizing the importance of aligning technical advancements with ethical principles. [Extracted from the article]
- Published
- 2025
- Full Text
- View/download PDF
9. Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance.
- Author
- Blanchard, Alexander, Thomas, Christopher, and Taddeo, Mariarosaria
- Abstract
The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative choices and corresponding tradeoffs that are involved in specifying guidance for the implementation of AI ethics principles in the defence domain. These correspond to: the AI lifecycle model used; the scope of stakeholder involvement; the accountability goals chosen; the choice of auditing requirements; and the choice of mechanisms for transparency and traceability. We provide initial recommendations for navigating these tradeoffs and highlight the importance of a pro-ethical institutional culture. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
10. Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing.
- Author
- Alsaigh, Roba, Mehmood, Rashid, Katib, Iyad, Liang, Xiaohui, Alshanqiti, Abdullah, Corchado, Juan M., and See, Simon
- Subjects
DATA privacy, FEDERATED learning, ARTIFICIAL intelligence, REWARD (Psychology), DATA libraries, METADATA, MEDICAL informatics, TECHNOLOGICAL progress
- Abstract
The article discusses the intersection of artificial intelligence (AI) and neuroscience in the field of neuroinformatics, emphasizing the need for robust governance frameworks that prioritize privacy and data sharing. It explores the state-of-the-art advancements in neuroinformatics, challenges such as data standardization and privacy, and evaluates AI governance regulations across regions like the EU, USA, UK, and China. The paper highlights the alignment, gaps, and challenges in harmonizing AI governance regulations with neuroinformatics practices, offering strategic recommendations for better integration and ethical advancement in research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
11. The role of ETSI in the EU's regulation and governance of artificial intelligence.
- Author
- Gamito, Marta Cantero
- Subjects
SYSTEM integration, TELECOMMUNICATIONS standards, DIGITAL technology, LEGAL education, ARTIFICIAL intelligence
- Abstract
This paper explores the significant role that standardisation plays in the regulation and governance of artificial intelligence (AI) within the European Union (EU). As AI technologies rapidly advance, they bring about important societal implications involving privacy, fairness, transparency and other relevant ethical considerations. As a result, legislators and policymakers around the world are joined by a common drive to provide legislative solutions and regulatory frameworks that guarantee that the ongoing integration of AI systems into society is consistent with fundamental rights and democratic values. This paper critically examines the EU Regulation on AI (AI Act), which delegates the definition of essential requirements for high-risk AI systems to harmonised standards, underscoring the significance of standardisation in ensuring technical feasibility and compliance with EU laws and values. At the forefront of this discussion is the increasing influence of AI-related standardisation across social, economic, and geopolitical domains, with a particular focus on the crucial role played by Standard Developing Organisations (SDOs) in the regulatory and governance processes. This paper contributes to the legal scholarship by critically analysing the regulatory approach chosen for the EU's AI Act, contesting the adequacy of the New Legislative Framework (NLF) for AI governance, and arguing that the reliance on harmonised standards risks undermining democratic accountability and fails to sufficiently safeguard fundamental rights without a more inclusive and transparent standard-setting process. The article focuses on the exclusion of the European Telecommunications Standards Institute (ETSI) from the European Commission's standardisation request in support of the AI Act, and assesses its potential impact on EU law-making and regulatory consistency. Ultimately, the analysis aims to contribute to the understanding of standardisation dynamics, offering insights into its profound implications for AI governance and the broader digital sphere. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Towards a Human Rights-Based Approach to Ethical AI Governance in Europe.
- Author
- Hogan, Linda and Lasek-Markey, Marta
- Subjects
CIVIL rights, VALUES (Ethics), ARTIFICIAL intelligence, SUBSIDIARITY, ETHICS
- Abstract
As AI-driven solutions continue to revolutionise the tech industry, scholars have rightly cautioned about the risks of 'ethics washing'. In this paper, we make a case for adopting a human rights-based ethical framework for regulating AI. We argue that human rights frameworks can be regarded as the common denominator between law and ethics and have a crucial role to play in the ethics-based legal governance of AI. This article examines the extent to which human rights-based regulation has been achieved in the primary example of legislation regulating AI governance, i.e., the EU AI Act 2024/1689. While the AI Act has a firm commitment to protect human rights, which in the EU legal order have been given expression in the Charter of Fundamental Rights, we argue that this alone does not contain adequate guarantees for enforcing some of these rights. This is because issues such as EU competence and the principle of subsidiarity make the idea of protection of fundamental rights by the EU rather than national constitutions controversial. However, we argue that human rights-based, ethical regulation of AI in the EU could be achieved through contextualisation within a values-based framing. In this context, we explore what are termed 'European values', which are values on which the EU was founded, notably Article 2 TEU, and consider the extent to which these could provide an interpretative framework to support effective regulation of AI and avoid 'ethics washing'. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. AI Integration and Economic Divides: Analyzing Global AI Strategies.
- Author
- Gualandri, Fabio and Kuzior, Aleksandra
- Subjects
ARTIFICIAL intelligence, SOCIOECONOMICS, MANUFACTURING industries, INDUSTRIAL safety, GROSS domestic product
- Abstract
This study investigates the impact of socio-economic factors on national AI strategies in India, Bangladesh, Germany, UAE, Egypt, and the USA through quantitative content analysis. The analysis explores the correlation between GDP per capita, the share of manufacturing, and the frequency of risk-related terms in AI strategy documents. It is found that wealthier nations emphasize AI risks more, correlating with deeper technological integration into their societal structures. Conversely, the emphasis on AI risks shows a weak correlation with the share of manufacturing, indicating broader AI impacts in service-oriented sectors. Lower-middle-income countries appear more optimistic, focusing on AI's economic benefits. The study underscores the need for balanced AI strategies that promote innovation while ensuring worker well-being, advocating for adaptive governance frameworks that enhance workplace safety and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. United States ∙ AI Law Developments: A Late 2024 Update.
- Author
- Mustafa, Alisar
- Subjects
GENERATIVE artificial intelligence, LANGUAGE models, CHILD sexual abuse laws, CHILD pornography, SOCIAL media, MINORS
- Abstract
The document "United States AI Law Developments: A Late 2024 Update" discusses the current state of AI regulation in the United States. With over 120 AI-related bills under consideration in Congress, federal inaction has led to states taking the lead in crafting AI-specific laws, often following the EU's risk-based approach. The report highlights examples of AI causing harm in high-risk sectors and emphasizes the urgent need for comprehensive regulations to ensure ethical AI deployment. Various federal agencies have taken steps to shape AI governance through enforcement actions, guidelines, and voluntary standards, but the lack of comprehensive federal legislation has created a fragmented regulatory framework. States like California, Colorado, New York, Texas, Pennsylvania, and Utah have implemented their own AI regulations, addressing issues such as election integrity, privacy, healthcare, education, and workforce development. The document concludes by stressing the importance of coordination between federal and state approaches, along with public input, to establish a coherent regulatory framework for AI in the US. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
15. LLMs beyond the lab: the ethics and epistemics of real-world AI research.
- Author
- Mollen, Joost
- Abstract
Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large-language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To address this gap, this paper provides an analysis of real-world research with LLMs and generative AI, assessing both its epistemic value and ethical concerns such as the potential for interpersonal and societal research harms, the increased privatization of AI learning, and the unjust distribution of benefits and risks. This paper discusses these concerns alongside four moral principles influencing research ethics standards: non-maleficence, beneficence, respect for autonomy, and distributive justice. I argue that real-world AI research faces challenges in meeting these principles and that these challenges are exacerbated by absent or imperfect current ethical governance. Finally, I chart two distinct but compatible ways forward: through ethical compliance and regulation and through moral education and cultivation. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
16. Nullius in Explanans: an ethical risk assessment for explainable AI.
- Author
- Nannini, Luca, Huyskes, Diletta, Panai, Enrico, Pistilli, Giada, and Tartaro, Alessio
- Abstract
Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short of accounting for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation of XAI systems. Furthermore, we address a broader range of contextual risks jeopardizing their security, accountability, and reception, alongside other cognitive, social, and ethical concerns of explanations. We advance a multi-layered risk assessment framework, where each layer advances strategies for practical intervention, management, and documentation of XAI systems within organizations. Recognizing the theoretical nature of the framework advanced, we discuss it in a conceptual case study. For the XAI community, our multifaceted investigation represents a path to practically address XAI risks while enriching our understanding of the ethical ramifications of incorporating XAI in decision-making processes. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
17. La prospettiva dell’intelligenza artificiale nel procedimento e nell’attività amministrativa
- Author
- Ilde Forgione
- Subjects
artificial intelligence, public administration, AI governance, administrative discretion, role of algorithms in proceedings, Law, Cybernetics, Q300-390
- Abstract
This paper analyses the impact of using artificial intelligence systems in the exercise of public functions, with particular attention to the consequences of delegating complex tasks to such tools, including within discretionary proceedings. In this context, it reflects on how automated decision-making may affect transparency, due process, and human responsibility, elements that are fundamental to guaranteeing the legitimacy of administrative action, as well as on the role to be assigned to AI. The contribution analyses the relevant national and European legislation, as well as the approach taken in case law, which reflects growing attention to transparency and accountability, aspects considered essential for the safe and effective adoption of AI in public contexts.
- Published
- 2024
- Full Text
- View/download PDF
18. Belgium ∙ AI under the Microscope of the GDPR: The Belgian Data Protection Authority Deciphers the Challenges of Data Protection in the Development and Use of AI.
- Author
- Dubuisson, Thomas
- Subjects
DATA protection, ARTIFICIAL intelligence, GENERAL Data Protection Regulation, 2016, LEGAL professions, DATA privacy
- Abstract
The article discusses the Belgian Data Protection Authority's (BDPA) practical guide on how Artificial Intelligence (AI) systems intersect with the General Data Protection Regulation (GDPR), addressing the challenges of data protection in AI development and usage. Topics discussed include the guide's role for legal professionals and developers, the definition and functionality of AI systems, and the BDPA's perspective on ensuring GDPR compliance in AI applications.
- Published
- 2024
- Full Text
- View/download PDF
19. Enhancing E-Government Services through State-of-the-Art, Modular, and Reproducible Architecture over Large Language Models.
- Author
- Papageorgiou, George, Sarlis, Vangelis, Maragoudakis, Manolis, and Tjortjis, Christos
- Subjects
GENERATIVE artificial intelligence, LANGUAGE models, ARTIFICIAL intelligence, COMPUTATIONAL linguistics, ELECTRONIC data processing
- Abstract
Integrating Large Language Models (LLMs) into e-government applications has the potential to improve public service delivery through advanced data processing and automation. This paper explores critical aspects of a modular and reproducible architecture based on Retrieval-Augmented Generation (RAG) for deploying LLM-based assistants within e-government systems. By examining current practices and challenges, we propose a framework ensuring that Artificial Intelligence (AI) systems are modular and reproducible, essential for maintaining scalability, transparency, and ethical standards. Our approach utilizing Haystack demonstrates a complete multi-agent Generative AI (GAI) virtual assistant that facilitates scalability and reproducibility by allowing individual components to be independently scaled. This research focuses on a comprehensive review of the existing literature and presents case study examples to demonstrate how such an architecture can enhance public service operations. This framework provides a valuable case study for researchers, policymakers, and practitioners interested in exploring the integration of advanced computational linguistics and LLMs into e-government services, although it could benefit from further empirical validation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Human Autonomy at Risk? An Analysis of the Challenges from AI.
- Author
- Prunkl, Carina
- Abstract
Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial intelligence (AI) have raised new questions about AI’s impacts on human autonomy. However, systematic assessments of these impacts are still rare and often held on a case-by-case basis. In this article, I provide a conceptual framework that both ties together seemingly disjoint issues about human autonomy, as well as highlights differences between them. In the first part, I distinguish between distinct concerns that are currently addressed under the umbrella term ‘human autonomy’. In particular, I show how differentiating between autonomy-as-authenticity and autonomy-as-agency helps us to pinpoint separate challenges from AI deployment. Some of these challenges are already well-known (e.g. online manipulation or limitation of freedom), whereas others have received much less attention (e.g. adaptive preference formation). In the second part, I address the different roles AI systems can assume in the context of autonomy. In particular, I differentiate between AI systems taking on agential roles and AI systems being used as tools. I conclude that while there is no ‘silver bullet’ to address concerns about human autonomy, considering its various dimensions can help us to systematically address the associated risks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Unravelling Copyright Dilemma of AI-Generated News and Its Implications for the Institution of Journalism: The Cases of US, EU, and China.
- Author
- Kuai, Joanne
- Subjects
COPYRIGHT, COPYRIGHT lawsuits, INTELLECTUAL property, ARTIFICIAL intelligence, DEINSTITUTIONALIZATION
- Abstract
This study adopts a multiple-case study design to address 'Does copyright law protect automated news, and if so, how' in three jurisdictions: the United States, the European Union and China. Through doctrinal legal analysis of the copyright laws and document analysis of policy reports, corporate responses and other empirical evidence, this study has found that the three copyright regimes differ substantively with regard to both formal texts and informal enforcement of copyright claims to artificial intelligence (AI)-generated news. In the United States, there has been a policy silence. In the European Union (EU), eager regulators have rushed to enact premature laws and failed policy patchwork. In China, the state is instrumentalising both laws and journalism to further its own interests. These findings suggest that current regulatory frameworks in all cases have led to a weakening of the institution of copyright, which, in turn, has contributed to the deinstitutionalisation of journalism and the institutionalisation of algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Trustworthiness of voting advice applications in Europe.
- Author
- Stockinger, Elisabeth, Maas, Jonne, Talvitie, Christofer, and Dignum, Virginia
- Abstract
Voting Advice Applications (VAAs) are interactive tools used to assist in one’s choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens’ trust and participation in democratic structures. However, there is no established ground truth for one’s electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs according to the Ethics Guidelines for Trustworthy AI provided by the European Commission using publicly available information. We found scores to be comparable across VAAs and low in most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of algorithm, and (iv) disclosure of the underlying values and assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. The pursuit of 'good' Internet policy.
- Author
- Gray, Joanne E., Hutchinson, Jonathon, Stilinovic, Milica, and Tjahja, Nadia
- Subjects
INTERNET content moderation, CONSPIRACY theories, POLITICAL participation, JUSTICE, GOVERNMENT agencies, NETWORK governance
- Published
- 2024
- Full Text
- View/download PDF
24. Navigating the Intersection of AI Governance and EU Competition Law: A Critical Analysis.
- Author
- Rohr, Shazana
- Subjects
ANTITRUST law, LAW enforcement, ARTIFICIAL intelligence, EUROPEAN Union law, CRITICAL analysis
- Abstract
Artificial Intelligence (AI) poses novel challenges for competition law, as it can both promote and undermine competition. The complexity of AI technologies, coupled with their potential to disrupt markets, has prompted competition authorities, particularly in the EU and the US, to strengthen cooperation in addressing these issues and reevaluate their current instruments. This paper explores the interplay between the recently published EU AI Act and competition law, with a focus on key provisions relevant to competition law enforcement. It begins by outlining the structure of the AI Act, highlighting sections particularly relevant to competition law. It next evaluates how effectively EU competition law can tackle anticompetitive practices involving AI-driven algorithms, with particular attention to killer acquisitions and tacit collusion. Additionally, it explores how the AI Act impacts the information-gathering capabilities of competition authorities in the EU. By identifying enforcement gaps and exploring ongoing challenges, this paper sheds light on the evolving role of competition law in an AI-driven market landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. AI governance in Asia: policies, praxis and approaches.
- Author
- Xu, Jian, Lee, Terence, and Goggin, Gerard
- Subjects
INTERNET governance, INTERNATIONAL competition, PRAXIS (Process), ARTIFICIAL intelligence, POLICY analysis
- Abstract
This article surveys the status quo of AI readiness and governance in Asia and identifies Asian approaches to AI regulation and governance through policy and document analysis. We note that some Asian countries are moving from 'soft regulation' through strategies and guidelines to 'hard regulation' through rule-setting and laws on AI. We argue that their AI governance approaches are greatly influenced by existing internet governance frameworks and suggest the importance of historical understandings of the Internet, telecommunications, and digital technology and governance to identify the connections, influences, and 'path dependency' of past policies upon current AI strategies and governance. We anticipate that the AI regulatory landscape in Asia will become a diverse and contentious space due to global AI competition among the EU, China and the U.S. as well as the pragmatic paths that many Asian countries may take considering their own histories, economies and politics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. AI governance in India – law, policy and political economy.
- Author
- Joshi, Divij
- Subjects
ARTIFICIAL intelligence, INFRASTRUCTURE (Economics), MARKET design & structure (Economics), BIG data, DATA analysis
- Abstract
Artificial Intelligence technologies have elicited a range of policy responses in India, particularly as the Government of India attempts to position and project the country as a global leader in the production of AI technologies. Policy responses have ranged from providing public infrastructure to enable market-led AI production, to nationalising datasets in an effort to enable Big Data analysis through AI. This paper examines the recent history of AI policy in India from a critical political economy perspective, and argues that AI policy and governance in India constructs and legitimises a globally-dominant paradigm of informational capitalism, based on the construction of data as a productive resource for an information-based economic production, and encouraging self-regulation of harmful impacts by firms, even as it attempts to secure a strong hand for the state to determine, both through law and infrastructure, how such a market is structured and to what ends. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. From Robodebt to responsible AI: sociotechnical imaginaries of AI in Australia.
- Author
- Kao, Kai-Ti
- Subjects
SOCIAL impact, SOCIAL responsibility, ARTIFICIAL intelligence
- Abstract
This paper examines Australia's recent AI governance efforts through the lens of sociotechnical imaginaries. Using the example of Robodebt, it demonstrates how a more holistic and contextual examination of AI governance can help shed light on the social impacts and responsibilities associated with AI technologies. It argues that, despite the recent discursive shift to 'safe and responsible AI', a sociotechnical imaginary of AI as 'economic good' has been a persistent undercurrent in the past two governments' efforts at AI governance. Understanding how such sociotechnical imaginaries are embedded in AI governance can help us better predict how these governance efforts will impact society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Artificial intelligence ambitions and regulatory pathways: Vietnam's strategy in the regional and global AI landscape.
- Author
-
Than, Nga and Liu, Larry
- Subjects
DIGITAL transformation ,ARTIFICIAL intelligence ,VENTURE capital ,INTELLECTUAL property ,PUBLIC universities & colleges - Abstract
The Vietnamese government announced the National Digital Transformation Programme to 2025 to invest in Artificial Intelligence (AI) development and become one of the top players in Southeast Asia by 2030, challenging frontrunners like Indonesia and Singapore. Through educational policies such as establishing AI and data science majors in public universities, the country has started to nurture a domestic talent base. Research labs both within university systems and at private industry labs such as VinAI and FPT have recruited Vietnamese nationals as well as international researchers to bolster AI research that puts Vietnam on the global map of AI development. However, the country's pursuit faces obstacles: an AI talent shortage, intellectual property laws too weak to support sound and safe innovation, and unstable global venture capital funding. We argue that Vietnam could benefit from more policy learning on AI from other jurisdictions such as the European Union and China. Drawing on policy documents from these three jurisdictions, this paper aims to elucidate the core issues shaping the nation's AI development and governance strategies. Our findings shed light on the areas Vietnam should prioritise in order to propel itself into a significant role within the regional and international AI spheres. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Verifying AI: will Singapore's experiment with AI governance set the benchmark?
- Author
-
Lim, Sun Sun and Chng, Gerry
- Subjects
GENERATIVE artificial intelligence ,ARTIFICIAL intelligence ,CHATGPT ,HIGH technology industries ,PUBLIC-private sector cooperation - Abstract
The rise of generative AI programmes like ChatGPT, Gemini, and Midjourney has generated both fascination and apprehension in society. While the possibilities of generative AI seem boundless, concerns about ethical violations, disinformation, and job displacement have ignited anxieties. The Singapore government established the AI Verify Foundation in June 2023 to address these issues in collaboration with major tech companies like Aicadium, Google, IBM, IMDA, Microsoft, Red Hat, and Salesforce, alongside numerous general members. This public-private partnership aims to promote the development and adoption of an open-source testing tool for fostering responsible AI usage through engaging the global community. The foundation also seeks to promote AI testing via education and outreach, and by serving as a neutral platform for collaboration. This initiative reflects a potential governance model for AI that balances public interests with commercial agendas. This article analyses the foundation's efforts and the AI Verify testing framework and toolkit to identify strengths, potential and limitations to distil key takeaways for establishing practicable solutions for AI governance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
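The abstract above describes AI Verify only at a high level, as an open-source testing tool, and does not reproduce the toolkit's programming interface. The sketch below is therefore a generic illustration of the pattern such a tool embodies: running named metrics against a model's predictions and recording pass/fail against declared thresholds. Every name in it (`run_governance_tests`, the toy accuracy test) is an illustrative assumption, not AI Verify's API.

```python
def run_governance_tests(model, dataset, tests):
    """Run each named test against the model's predictions and record
    pass/fail against its declared threshold.

    model: callable mapping an input to a prediction
    dataset: list of (input, label) pairs
    tests: dict name -> (metric_fn, threshold); metric_fn takes
           (predictions, labels) and returns a float compared to threshold
    """
    preds = [model(x) for x, _ in dataset]
    labels = [y for _, y in dataset]
    report = {}
    for name, (metric_fn, threshold) in tests.items():
        score = metric_fn(preds, labels)
        report[name] = {"score": score, "threshold": threshold,
                        "passed": score >= threshold}
    return report

# Example: a trivial accuracy test on a toy "model"
accuracy = lambda p, y: sum(a == b for a, b in zip(p, y)) / len(y)
report = run_governance_tests(lambda x: x > 0,
                              [(1, True), (-1, False), (2, True)],
                              {"accuracy": (accuracy, 0.9)})
# report["accuracy"]["passed"] is True (score 1.0 >= 0.9)
```

The point of such a harness, and of the abstract's "open-source testing tool", is that the thresholds and metrics are declared up front and the report is machine-readable, so results can be audited by a third party.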
30. Opening the 'black box' of algorithms: regulation of algorithms in China.
- Author
-
Xu, Jian
- Subjects
GENERATIVE artificial intelligence ,INTERNET governance ,ELECTRONIC commerce ,VALUE orientations ,PUBLIC administration - Abstract
This article maps the trajectory of China's regulation of algorithms via policy review. It divides China's governing progress into three phases: the 'post-event policy response and penalty' phase, the 'ethics guidelines, guiding opinions and self-discipline pacts' phase and the 'legislation and implementation' phase. The paper argues that the ideological and political implications of algorithmic applications are the highest concern for Chinese regulators. China's regulation of algorithms follows a 'state-centric multilateral model' – the same model used for its internet governance. The 'algorithmic transparency' advocated by regulators is currently only limited to algorithms in the platform economy and industries rather than those used for government decision-making and public administration. As the first nation to issue laws regulating algorithms and generative AI, China faces problems and challenges emerging from further implementing the laws. China's experience will provide valuable first-hand understandings for countries currently creating legal frameworks to regulate algorithms and AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Critical Criteria for AI Impact Assessment: A Proposal, Applied on Current Standards.
- Author
-
Skoric, Vanja, Sileno, Giovanni, and Ghebreab, Sennay
- Subjects
ARTIFICIAL intelligence ,TECHNOLOGY assessment ,INTERNET governance ,TECHNOLOGY ,HUMAN rights - Abstract
Standardisation processes for the assessment of the impact of artificial intelligence (AI) with regards to ethical and societal concerns are underway, striving to address recent normative requirements for AI development and use. This paper examines contemporary standard-setting efforts and AI impact assessment debates to identify a reference set of criteria for the assessment process and demonstrates how to apply these on relevant standards. To build this reference, the paper reviews existing research on impact assessments in comparable areas (privacy, environment, health), examines relevant discussions on AI assessment processes and methods, seeking common elements to identify potential criteria for an effective and meaningful assessment. The paper then structures the core elements of AI impact assessment discussed in the literature in five dimensions, forming an organic whole: normative framework, process rules, methodology, engagement, and oversight. Within each of these dimensions, the paper proposes a set of critical criteria for meaningful impact assessment by integrating reflections on the challenges raised in the literature, identified gaps and pitfalls. Applying the proposed set of criteria, the paper analyses to what extent AI impact assessment processes developed by the standard bodies with international outreach (ISO, NIST and IEEE) are meaningful and effective for addressing ethical and human rights considerations. The resulting framework of criteria aims to support AI governance through offering a practical understanding of the challenges observed in the literature, potentially useful to developers, regulators and standardisation initiatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
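The five dimensions the paper identifies (normative framework, process rules, methodology, engagement, oversight) lend themselves to a simple checklist representation. The sketch below is hypothetical: the dimension names come from the abstract, but the individual questions and the `coverage` scorer are illustrative assumptions, not the authors' criteria.

```python
# Hypothetical checklist keyed by the five dimensions named in the abstract.
# The questions themselves are illustrative, not the paper's criteria.
ASSESSMENT_DIMENSIONS = {
    "normative_framework": ["Is the assessment anchored in stated ethical or human-rights norms?"],
    "process_rules": ["Are the steps, triggers, and timing of the assessment defined?"],
    "methodology": ["Are impact identification and measurement methods specified?"],
    "engagement": ["Are affected stakeholders consulted during the assessment?"],
    "oversight": ["Is there independent review of the assessment's outcome?"],
}

def coverage(answers):
    """Fraction of dimensions where every question is answered affirmatively.

    answers: dict mapping question text -> bool
    """
    covered = sum(
        all(answers.get(q, False) for q in questions)
        for questions in ASSESSMENT_DIMENSIONS.values()
    )
    return covered / len(ASSESSMENT_DIMENSIONS)
```

A standard that satisfies four of the five dimensions would score 0.8, making gaps (say, a missing oversight mechanism) immediately visible.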
32. AI policymaking as drama
- Author
-
Alison Powell and Fenwick McKelvey
- Subjects
policy ,drama ,AI governance ,Canada ,United Kingdom ,critical policy studies ,Social sciences (General) ,H1-99 - Abstract
As two researchers faced with the prospect of still more knowledge mobilisation, and still more consultation, we critically reflect in this manuscript on strategies for engaging with consultations as critical questions in critical AI studies. Our intervention reflects on the often-ambivalent roles of researchers and ‘experts’ in the production, contestation, and transformation of consultations and the publicities therein concerning AI. Although ‘AI’ is increasingly becoming a marketing term, there are still substantive strategic efforts toward developing AI industries. These policy consultations do open opportunities for experts like the authors to contribute to public discourse and policy practice on AI. Regardless, in the process of negotiating and developing around these initiatives, a range of dominant publicities emerge, including inevitability and hype. We draw on our experiences contributing to AI policy-making processes in two Global North countries. Resurfacing long-standing critical questions about participation in policymaking, our manuscript reflects on the possibilities of critical scholarship faced with the uncertainty in the rhetoric of democracy and public engagement.
- Published
- 2024
- Full Text
- View/download PDF
33. Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing
- Author
-
Roba Alsaigh, Rashid Mehmood, Iyad Katib, Xiaohui Liang, Abdullah Alshanqiti, Juan M. Corchado, and Simon See
- Subjects
neuroinformatics ,privacy ,data sharing ,ethical AI ,AI governance ,regulatory frameworks ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Published
- 2024
- Full Text
- View/download PDF
34. Why we need to be careful with LLMs in medicine
- Author
-
Jean-Christophe Bélisle-Pipon
- Subjects
artificial intelligence ,AI ethics ,LLM ,medicine ,AI regulation ,AI governance ,Medicine (General) ,R5-920 - Published
- 2024
- Full Text
- View/download PDF
35. A comprehensive review of techniques for documenting artificial intelligence
- Author
-
Königstorfer, Florian
- Published
- 2024
- Full Text
- View/download PDF
36. AI metrics and policymaking: assumptions and challenges in the shaping of AI
- Author
-
Sioumalas-Christodoulou, Konstantinos and Tympas, Aristotle
- Published
- 2025
- Full Text
- View/download PDF
37. AI governance: a systematic literature review
- Author
-
Batool, Amna, Zowghi, Didar, and Bano, Muneera
- Published
- 2025
- Full Text
- View/download PDF
38. Insights into suggested Responsible AI (RAI) practices in real-world settings: a systematic literature review
- Author
-
Bach, Tita Alissa, Kaarstad, Magnhild, Solberg, Elizabeth, and Babic, Aleksandar
- Published
- 2025
- Full Text
- View/download PDF
39. Gender Mainstreaming into African Artificial Intelligence Policies: Egypt, Rwanda and Mauritius as Case Studies
- Author
-
Ifeoma E Nwafor
- Subjects
gender mainstreaming ,artificial intelligence ,african artificial intelligence policies ,ai governance ,gender and ai ,Law in general. Comparative and uniform law. Jurisprudence ,K1-7720 - Abstract
Bias, particularly gender bias, is common in artificial intelligence (AI) systems, leading to harmful impacts that reinforce existing negative gender stereotypes and prejudices. Although gender mainstreaming is topical and fashionable in written discourse, it is yet to be thoroughly implemented in practice. While the clamour for AI regulation is commonplace globally, most government policies on the topic do not adequately account for gender inequities. In Africa, Egypt, Rwanda and Mauritius are at the forefront of AI policy development. By exploring these three countries as case studies, employing a feminist approach and using the African Union Strategy for Gender Equality & Women’s Empowerment for 2018–2028 as a methodological guide, this study undertakes a comparative analysis of the gender considerations in their policy approaches to AI. It was found that a disconnect exists between gender equality/responsiveness and the AI strategies of these countries, showing that gender has yet to be mainstreamed into these policies. The study provides key recommendations that offer an opportunity for African countries to be innovative leaders in AI governance by developing even more robust policies compared with Western AI policies that fail to adequately address gender.
- Published
- 2024
- Full Text
- View/download PDF
40. Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.
- Author
-
Buhmann, Alexander and Fieseler, Christian
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,DELIBERATIVE democracy - Abstract
Responsible innovation in artificial intelligence (AI) calls for public deliberation: well-informed "deep democratic" debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of actors from the AI industry within a deliberative system. We develop a new framework of responsibilities for AI innovation as well as a deliberative governance approach for enacting these responsibilities. In elucidating this approach, we show how actors from the AI industry can most effectively engage with experts and nonexperts in different social venues to facilitate well-informed judgments on opaque AI systems and thus effectuate their democratic governance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility.
- Author
-
Ridzuan, Nurhadhinah Nadiah, Masri, Masairol, Anshari, Muhammad, Fitriyani, Norma Latif, and Syafrudin, Muhammad
- Subjects
- *
ARTIFICIAL intelligence , *CRITICAL success factor , *DATA privacy , *BANKING industry , *CREDIT analysis - Abstract
This study examines the applications, benefits, challenges, and ethical considerations of artificial intelligence (AI) in the banking and finance sectors. It reviews current AI regulation and governance frameworks to provide insights for stakeholders navigating AI integration. A descriptive analysis based on a literature review of recent research is conducted, exploring AI applications, benefits, challenges, regulations, and relevant theories. This study identifies key trends and suggests future research directions. The major findings include an overview of AI applications, benefits, challenges, and ethical issues in the banking and finance industries. Recommendations are provided to address these challenges and ethical issues, along with examples of existing regulations and strategies for implementing AI governance frameworks within organizations. This paper highlights innovation, regulation, and ethical issues in relation to AI within the banking and finance sectors, analyzes the previous literature, and suggests strategies for AI governance framework implementation and future research directions. AI applications integrate with fintech in areas such as preventing financial crimes, credit risk assessment, customer service, and investment management. These applications improve decision making and enhance the customer experience, particularly in banks. Existing AI regulations and guidelines include those from Hong Kong SAR, the United States, China, the United Kingdom, the European Union, and Singapore. Challenges include data privacy and security, bias and fairness, accountability and transparency, and the skill gap. Therefore, implementing an AI governance framework requires rules and guidelines to address these issues. This paper makes recommendations for policymakers and suggests practical implications in reference to the ASEAN guidelines for AI development at the national and regional levels.
For future research, a combination of extended UTAUT, change theory, and institutional theory, together with critical success factors, can fill the theoretical gap through mixed-method research. The population gap can be addressed by research undertaken in a nation where fintech services are projected to be less accepted, such as a developing or Islamic country. In summary, this study presents a novel approach using descriptive analysis, offering four main contributions: (1) the applications of AI in the banking and finance industries, (2) the benefits and challenges of AI adoption in these industries, (3) the current AI regulations and governance, and (4) the types of theories relevant for further research. The research findings are expected to contribute to policy and offer practical implications for fintech development in a country. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Generative AI in Japan: Current Trends and Future Directions. "Responding to emerging issues related to generative AI and copyright law, etc.".
- Author
-
Hajime IDEI
- Abstract
In March 2017, the Intellectual Property Strategy Headquarters of the Cabinet Office compiled a list of possible issues, and ideas for addressing them, anticipating that generative AI and AI products could create new problems related to copyright law. At the time, however, there were few practical examples of generative AI, and specific studies were deferred pending future changes in AI technology. Almost seven years have passed since then, and things have changed dramatically. While the rapid penetration of generative AI is bringing benefits to our lives, it is also revealing fears and concerns. This paper therefore analyses the issues related to generative AI and copyright law and considers how to respond to them. [ABSTRACT FROM AUTHOR]
- Published
- 2024
43. Legislative and Ethical Foundations for Future Artificial Intelligence.
- Author
-
Hadi, Mohammed Hasan and Jasim, Asmaa Ali
- Subjects
ARTIFICIAL intelligence ,UNFUNDED mandates ,RESEARCH & development ,SELF regulation ,EXPERTISE - Abstract
Copyright of Journal of the College Of Basic Education is the property of Republic of Iraq Ministry of Higher Education & Scientific Research (MOHESR) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
44. Enhancing ethical codes with artificial intelligence governance – a growing necessity for the adoption of generative AI in counselling.
- Author
-
Ooi, Pei Boon and Wilkinson, Graeme
- Subjects
- *
GENERATIVE artificial intelligence , *CODES of ethics , *ARTIFICIAL intelligence , *LANGUAGE models , *COUNSELING - Abstract
The advent of generative Artificial Intelligence (AI) systems, such as large language model chatbots, is likely to have a significant impact in psychotherapy and counselling in the future. In this paper we consider the current state of AI in psychotherapy and counselling and the likely evolution of this field. We examine the ethical codes of practice for counselling in four countries in different parts of the world, namely the UK, the USA, Australia and Malaysia, and identify aspects of these codes that will need enhancement to reflect good AI governance. Using the Model Artificial Intelligence Governance Framework as an example, we have identified how the key elements of the AI framework relate to the core elements of the ethical codes, as a pointer to how such ethical codes will need to be enhanced if generative AI systems are to be adopted by the counselling profession. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering.
- Author
-
Lu, Qinghua, Zhu, Liming, Xu, Xiwei, Whittle, Jon, Zowghi, Didar, and Jacquet, Aurelie
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *DEEP reinforcement learning , *REINFORCEMENT learning , *FAILURE mode & effects analysis , *CAPABILITY maturity model , *SOFTWARE engineering - Published
- 2024
- Full Text
- View/download PDF
46. Artificial intelligence governance: Ethical considerations and implications for social responsibility.
- Author
-
Camilleri, Mark Anthony
- Subjects
- *
ARTIFICIAL intelligence , *SOCIAL responsibility , *SOCIAL impact , *SOCIAL responsibility of business , *EXPERT systems , *CONSCIOUSNESS raising - Abstract
A growing number of articles are raising awareness of the different uses of artificial intelligence (AI) technologies for customers and businesses. Many authors discuss their benefits and possible challenges. However, for the time being, there is still limited research focused on AI principles and regulatory guidelines for the developers of expert systems like machine learning (ML) and/or deep learning (DL) technologies. This research addresses this knowledge gap in the academic literature. The objectives of this contribution are threefold: (i) it describes AI governance frameworks that were put forward by technology conglomerates, policy makers and intergovernmental organizations; (ii) it sheds light on the extant literature on 'AI governance' as well as on the intersection of 'AI' and 'corporate social responsibility' (CSR); (iii) it identifies key dimensions of AI governance, and elaborates on the promotion of accountability and transparency; explainability, interpretability and reproducibility; fairness and inclusiveness; privacy and safety of end users; as well as the prevention of risks and of cyber security issues from AI systems. This research implies that all those who are involved in the research, development and maintenance of AI systems have social and ethical responsibilities to bear toward their consumers as well as other stakeholders in society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Developing a Framework for Self-regulatory Governance in Healthcare AI Research: Insights from South Korea.
- Author
-
Kim, Junhewk, Kim, So Yoon, Kim, Eun-Ae, Sim, Jin-Ah, Lee, Yuri, and Kim, Hannah
- Subjects
- *
ARTIFICIAL intelligence , *RESEARCH ethics , *RESEARCH personnel , *MEDICAL care , *CLINICAL trials - Abstract
This paper elucidates and rationalizes the ethical governance system for healthcare AI research, as outlined in the 'Research Ethics Guidelines for AI Researchers in Healthcare' published by the South Korean government in August 2023. In developing the guidelines, a four-phase clinical trial process was expanded to six stages for healthcare AI research: preliminary ethics review (stage 1); creating datasets (stage 2); model development (stage 3); training, validation, and evaluation (stage 4); application (stage 5); and post-deployment monitoring (stage 6). Researchers identified similarities between clinical trials and healthcare AI research, particularly in research subjects, management and regulations, and application of research results. In the step-by-step articulation of ethical requirements, this similarity allows reliable and flexible use of existing research ethics governance resources, research management, and regulatory functions. In contrast to clinical trials, this procedural approach to healthcare AI research governance effectively highlights the distinct characteristics of healthcare AI research in the research and development process, the evaluation of results, and the modifiability of findings. The model exhibits limitations, primarily in its reliance on self-regulation and its lack of a clear delineation of responsibilities. While formulated through multidisciplinary deliberations, its application in the research field remains untested. To overcome these limitations, the researchers' ongoing efforts to educate AI researchers and the public, together with revision of the guidelines, are expected to contribute to establishing an ethical research governance framework for healthcare AI research in the South Korean context. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Should Fairness be a Metric or a Model? A Model-based Framework for Assessing Bias in Machine Learning Pipelines.
- Author
-
Lalor, John P., Abbasi, Ahmed, Oketch, Kezia, Yang, Yi, and Forsgren, Nicole
- Abstract
The article introduces a model-based framework, FAIR-Frame, for assessing bias in machine learning pipelines, emphasizing the need to consider both upstream representational harm and downstream allocational harm. Topics include the limitations of existing fairness metrics, the proposed framework's evaluation on text classification tasks, and its implications for various machine learning contexts.
- Published
- 2024
- Full Text
- View/download PDF
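The "metric or model" question above is easier to see with a concrete metric in hand. FAIR-Frame itself is not reproduced in the abstract, so the sketch below shows only the simplest metric-style baseline, a demographic parity gap: the kind of single-number fairness measure the authors argue is insufficient on its own because it ignores where in the pipeline harm arises.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-prediction rates between groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels aligned with predictions
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, g in zip(predictions, groups):
        counts[g][0] += pred
        counts[g][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

gap = demographic_parity_gap([1, 0, 1, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"])
# group a rate = 2/3, group b rate = 1/3, so gap = 1/3
```

A metric like this summarises downstream allocational harm in one number; the model-based framing the article describes would instead account for upstream representational harm as well, which no single post-hoc rate comparison can capture.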
49. Who is responsible? US Public perceptions of AI governance through the lenses of trust and ethics.
- Author
-
David, Prabu, Choung, Hyesun, and Seberger, John S.
- Subjects
PUBLIC opinion ,TRUST ,ARTIFICIAL intelligence ,NETWORK governance ,ETHICS ,VALUES (Ethics) - Abstract
The governance of artificial intelligence (AI) is an urgent challenge that requires actions from three interdependent stakeholders: individual citizens, technology corporations, and governments. We conducted an online survey (N = 525) of US adults to examine their beliefs about the governance responsibility of these stakeholders as a function of trust and AI ethics. Different dimensions of trust and different ethical concerns were associated with beliefs in governance responsibility of the three stakeholders. Specifically, belief in the governance responsibility of the government was associated with ethical concerns about AI, whereas belief in governance responsibility of corporations was related to both ethical concerns and trust in AI. Belief in governance responsibility of individuals was related to human-centered values of trust in AI and fairness. Overall, the findings point to the need for an interdependent framework in which citizens, corporations, and governments share governance responsibilities, guided by trust and ethics as the guardrails. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Gender Mainstreaming into African Artificial Intelligence Policies: Egypt, Rwanda and Mauritius as Case Studies.
- Author
-
Nwafor, Ifeoma E.
- Subjects
ARTIFICIAL intelligence ,GENDER stereotypes ,PREJUDICES ,GOVERNMENT policy ,FEMINISM - Abstract
Bias, particularly gender bias, is common in artificial intelligence (AI) systems, leading to harmful impacts that reinforce existing negative gender stereotypes and prejudices. Although gender mainstreaming is topical and fashionable in written discourse, it is yet to be thoroughly implemented in practice. While the clamour for AI regulation is commonplace globally, most government policies on the topic do not adequately account for gender inequities. In Africa, Egypt, Rwanda and Mauritius are at the forefront of AI policy development. By exploring these three countries as case studies, employing a feminist approach and using the African Union Strategy for Gender Equality & Women’s Empowerment for 2018–2028 as a methodological guide, this study undertakes a comparative analysis of the gender considerations in their policy approaches to AI. It found that a disconnect exists between gender equality/responsiveness and the AI strategies of these countries, showing that gender has yet to be mainstreamed into these policies. The study provides key recommendations that offer an opportunity for African countries to be innovative leaders in AI governance by developing even more robust policies compared with Western AI policies that fail to adequately address gender. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF