265 results
Search Results
2. Annual Proceedings of Selected Papers on the Practice of Educational Communications and Technology Presented at the Annual Convention of the Association for Educational Communications and Technology (42nd, Las Vegas, Nevada, 2019). Volume 2
- Author
-
Association for Educational Communications and Technology, Simonson, Michael, and Seepersaud, Deborah
- Abstract
For the forty-second time, the Association for Educational Communications and Technology (AECT) is sponsoring the publication of these Proceedings. Papers published in this volume were presented at the annual AECT Convention in Las Vegas, Nevada. The Proceedings of AECT's Convention are published in two volumes. Volume 1 contains papers dealing primarily with research and development topics. Twenty-three papers dealing with the practice of instructional technology including instruction and training issues are contained in Volume 2. [For Volume 1, see ED609416.]
- Published
- 2019
3. Annual Proceedings of Selected Research and Development Papers Presented at the Annual Convention of the Association for Educational Communications and Technology (42nd, Las Vegas, Nevada, 2019). Volume 1
- Author
-
Association for Educational Communications and Technology, Simonson, Michael, and Seepersaud, Deborah
- Abstract
For the forty-second time, the Association for Educational Communications and Technology (AECT) is sponsoring the publication of these Proceedings. Papers published in this volume were presented at the annual AECT Convention in Las Vegas, Nevada. The Proceedings of AECT's Convention are published in two volumes. Volume 1 contains 37 papers dealing primarily with research and development topics. Papers dealing with the practice of instructional technology including instruction and training issues are contained in Volume 2. [For Volume 2, see ED609417.]
- Published
- 2019
4. A selection of papers from MICCAI 2004: the marriage of data and prior information.
- Author
-
Haynor DR, Barillot C, and Hellier P
- Subjects
- Congresses as Topic, Diagnostic Imaging trends, Publications, Subtraction Technique, Surgery, Computer-Assisted trends, United States, Algorithms, Artificial Intelligence, Diagnostic Imaging methods, Image Enhancement methods, Image Interpretation, Computer-Assisted methods, Models, Biological, Surgery, Computer-Assisted methods
- Published
- 2005
- Full Text
- View/download PDF
5. The health risks of generative AI-based wellness apps.
- Author
-
De Freitas J and Cohen IG
- Subjects
- Humans, Health Promotion, United States, Telemedicine, Mobile Applications, Artificial Intelligence, Mental Health
- Abstract
Artificial intelligence (AI)-enabled chatbots are increasingly being used to help people manage their mental health. Chatbots for mental health and particularly 'wellness' applications currently exist in a regulatory 'gray area'. Indeed, most generative AI-powered wellness apps will not be reviewed by health regulators. However, recent findings suggest that users of these apps sometimes use them to share mental health problems and even to seek support during crises, and that the apps sometimes respond in a manner that increases the risk of harm to the user, a challenge that the current US regulatory structure is not well equipped to address. In this Perspective, we discuss the regulatory landscape and potential health risks of AI-enabled wellness apps. Although we focus on the United States, there are similar challenges for regulators across the globe. We discuss the problems that arise when AI-based wellness apps cross into medical territory and the implications for app developers and regulatory bodies, and we outline outstanding priorities for the field.
- Published
- 2024
- Full Text
- View/download PDF
6. AI, Biometric Analysis, and Emerging Cheating Detection Systems: The Engineering of Academic Integrity?
- Author
-
Oravec, Jo Ann
- Abstract
Cheating behaviors have been construed as a continuing and somewhat vexing issue for academic institutions as they increasingly conduct educational processes online and impose metrics on instructional evaluation. Research, development, and implementation initiatives on cheating detection have gained new dimensions with the advent of artificial intelligence (AI) applications; they have also engendered special challenges in terms of their social, ethical, and cultural implications. An assortment of commercial cheating-detection systems has been injected into educational contexts with little input on the part of relevant stakeholders. This paper expands on several specific cases of how systems for the detection of cheating have recently been implemented in higher education institutions in the US and UK. It investigates how such vehicles as wearable technologies, eye scanning, and keystroke capturing are being used to collect the data used for anti-cheating initiatives, often involving systems that have not gone through rigorous testing and evaluation of their validity and potential educational impacts. The paper discusses accountability- and policy-related issues concerning the outsourcing of cheating detection in institutional settings in the light of these emerging technological practices, as well as student resistance against the systems involved. These cheating-detection practices can place students in a disempowered, asymmetrical position that is often at substantial variance with their cultural backgrounds.
- Published
- 2022
7. Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape
- Author
-
Bozkurt, Aras, Xiao, Junhong, Lambert, Sarah, Pazurek, Angelica, Crompton, Helen, Koseoglu, Suzan, Farrow, Robert, Bond, Melissa, Nerantzi, Chrissi, Honeychurch, Sarah, Bali, Maha, Dron, Jon, Mir, Kamran, Stewart, Bonnie, Costello, Eamon, Mason, Jon, Stracke, Christian M., Romero-Hall, Enilda, Koutropoulos, Apostolos, Toquero, Cathy Mae, Singh, Lenandlar, Tlili, Ahm, Lee, Kyungmee, Nichols, Mark, Ossiannilsson, Ebba, Brown, Mark, Irvine, Valerie, Raffaghelli, Juliana Elisa, Santos-Hermosa, Gema, Farrell, Orna, Adam, Taskeen, Thong, Ying Li, Sani-Bozkurt, Sunagul, Sharma, Ramesh C., Hrastinski, Stefan, and Jandric, Petar
- Abstract
While ChatGPT has recently become very popular, AI has a long history and philosophy. This paper intends to explore the promises and pitfalls of the Generative Pre-trained Transformer (GPT) AI and potentially future technologies by adopting a speculative methodology. Speculative future narratives with a specific focus on educational contexts are provided in an attempt to identify emerging themes and discuss their implications for education in the 21st century. Affordances of (using) AI in Education (AIEd) and possible adverse effects are identified and discussed which emerge from the narratives. It is argued that now is the best of times to define human vs AI contribution to education because AI can accomplish more and more educational activities that used to be the prerogative of human educators. Therefore, it is imperative to rethink the respective roles of technology and human educators in education with a future-oriented mindset.
- Published
- 2023
8. Artificial Intelligence in Science Education: A Bibliometric Review
- Author
-
Roza S. Akhmadieva, Natalia N. Udina, Yuliya P. Kosheleva, Sergei P. Zhdanov, Maria O. Timofeeva, and Roza L. Budkevich
- Abstract
A descriptive bibliometric analysis of works on artificial intelligence (AI) in science education is provided in this article to help readers understand the state of the field's research at the time. This study's main objective is to give bibliometric data on publications regarding AI in science education printed in periodicals listed in the Scopus database between 2002 and the end of May 2023. The data gathered from publications scanned and published within the study's parameters was subjected to descriptive bibliometric analysis based on seven categories: number of articles and citations per year, countries with the most publications, most productive author, most significant affiliation, funding institutions, publication source and subject areas. Most of the papers were published between 2016 and 2022. The United States of America, United Kingdom, and China were the three most productive nations, with the United States of America producing the most publications. The number of citations to the publications indexed in the Scopus database increased progressively and reached its maximum in 2022, with 178 citations. The most productive author on this topic was Salles, P., with four publications. Moreover, Carnegie Mellon University, University of Memphis, and University of Southern California were the affiliations with the most publications. The National Science Foundation was the leading funding institution in terms of the number of publications produced. In addition, "Proceedings Frontiers in Education Conference Fie" had the highest number of publications as a publication source. Distribution of the publications by subject area was analyzed. The subject areas of the publications were computer science, social sciences, science education, technology, and engineering education. This study presents a vision for future research and provides a global perspective on AI in science education.
- Published
- 2023
9. The role of machine learning in clinical research: transforming the future of evidence generation.
- Author
-
Weissler EH, Naumann T, Andersson T, Ranganath R, Elemento O, Luo Y, Freitag DF, Benoit J, Hughes MC, Khan F, Slater P, Shameer K, Roe M, Hutchison E, Kollins SH, Broedl U, Meng Z, Wong JL, Curtis L, Huang E, and Ghassemi M
- Subjects
- Humans, United States, United States Food and Drug Administration, Artificial Intelligence, Machine Learning
- Abstract
Background: Interest in the application of machine learning (ML) to the design, conduct, and analysis of clinical trials has grown, but the evidence base for such applications has not been surveyed. This manuscript reviews the proceedings of a multi-stakeholder conference to discuss the current and future state of ML for clinical research. Key areas of clinical trial methodology in which ML holds particular promise and priority areas for further investigation are presented alongside a narrative review of evidence supporting the use of ML across the clinical trial spectrum. Results: Conference attendees included stakeholders, such as biomedical and ML researchers, representatives from the US Food and Drug Administration (FDA), artificial intelligence technology and data analytics companies, non-profit organizations, patient advocacy groups, and pharmaceutical companies. ML contributions to clinical research were highlighted in the pre-trial phase, cohort selection and participant management, and data collection and analysis. Particular attention was paid to the operational and philosophical barriers to ML in clinical research. Peer-reviewed evidence was noted to be lacking in several areas. Conclusions: ML holds great promise for improving the efficiency and quality of clinical research, but substantial barriers remain, the surmounting of which will require addressing significant gaps in evidence.
- Published
- 2021
- Full Text
- View/download PDF
10. Proceedings of the International Association for Development of the Information Society (IADIS) International Conference on E-Learning (Lisbon, Portugal, July 20-22, 2017)
- Author
-
International Association for Development of the Information Society (IADIS), Nunes, Miguel Baptista, McPherson, Maggie, Kommers, Piet, and Isaias, Pedro
- Abstract
These proceedings contain the papers of the International Conference e-Learning 2017, which was organised by the International Association for Development of the Information Society, 20-22 July, 2017. This conference is part of the Multi Conference on Computer Science and Information Systems 2017, 20-23 July, which had a total of 652 submissions. The e-Learning (EL) 2017 conference aims to address the main issues of concern within e-Learning, covering both the technical and the non-technical aspects of e-Learning. The conference accepted submissions in the following seven main areas: (1) Organisational Strategy and Management Issues; (2) Technological Issues; (3) e-Learning Curriculum Development Issues; (4) Instructional Design Issues; (5) e-Learning Delivery Issues; (6) e-Learning Research Methods and Approaches; and (7) e-Skills and Information Literacy for Learning. The conference also included one keynote presentation from Thomas C. Reeves, Professor Emeritus of Learning, Design and Technology, College of Education, The University of Georgia, USA.
The full papers presented at these proceedings include: (1) Game Changer For Online Learning Driven by Advances in Web Technology (Manfred Kaul, André Kless, Thorsten Bonne and Almut Rieke); (2) E-Learning Instructional Design Practice in American and Australian Institutions (Sayed Hadi Sadeghi); (3) A Game Based E-Learning System to Teach Artificial Intelligence in the Computer Sciences Degree (Amable de Castro-Santos, Waldo Fajardo and Miguel Molina-Solana); (4) The Next Stage Of Development of e-Learning at UFH in South Africa (Graham Wright, Liezel Cilliers, Elzette Van Niekerk and Eunice Seekoe); (5) Effect of Internet-Based Learning in Public Health Training: An Exploratory Meta-Analysis (Ying Peng and Weirong Yan); (6) Enhancing a Syllabus for Intermediate ESL Students with BYOD Interventions (Ewa Kilar-Magdziarz); (7) Post Graduations in Technologies and Computing Applied to Education: From F2F Classes to Multimedia Online Open Courses (Bertil P. Marques, Piedade Carvalho, Paula Escudeiro, Ana Barata, Ana Silva and Sandra Queiros); (8) Towards Architecture for Pedagogical and Game Scenarios Adaptation in Serious Games (Wassila Debabi and Ronan Champagnat); (9) Semantic Modelling for Learning Styles and Learning Material in an e-Learning Environment (Khawla Alhasan, Liming Chen and Feng Chen); (10) Physical Interactive Game for Enhancing Language Cognitive Development of Thai Pre-Schooler (Noppon Choosri and Chompoonut Pookao); (11) From a CV to an e-Portfolio: An Exploration of Adult Learner's Perception of the ePortfolio as a Job Seeking Tool (John Kilroy); (12) The Emotional Geographies of Parent Participation in Schooling: Headteachers' Perceptions in Taiwan (Hsin-Jen Chen and Ya-Hsuan Wang); (13) Geopolitical E-Analysis Based on E-Learning Content (Anca Dinicu and Romana Oancea); (14) Predictors of Student Performance in a Blended-Learning Environment: An Empirical Investigation (Lan Umek, Nina Tomaževic, Aleksander Aristovnik and Damijana Keržic); (15) 
Practice of Organisational Strategies of Improving Computer Rooms for Promoting Smart Education Using ICT Equipment (Nobuyuki Ogawa and Akira Shimizu); (16) Why Do Learners Choose Online Learning: The Learners' Voices (Hale Ilgaz and Yasemin Gulbahar); and (17) Enhancing Intercultural Competence of Engineering Students via GVT (Global Virtual Teams)-Based Virtual Exchanges: An International Collaborative Course in Intralogistics Education (Rui Wang, Friederike Rechl, Sonja Bigontina, Dianjun Fang, Willibald A. Günthner and Johannes Fottner). Short papers presented include: (1) Exploring Characteristics of Fine-Grained Behaviors of Learning Mathematics in Tablet-Based E-Learning Activities (Cheuk Yu Yeung, Kam Hong Shum, Lucas Chi Kwong Hui, Samuel Kai Wah Chu, Tsing Yun Chan, Yung Nin Kuo and Yee Ling Ng); (2) Breaking the Gendered-Technology Phenomenon in Taiwan's Higher Education (Ya-Hsuan Wang); (3) Ontology-Based Learner Categorization through Case Based Reasoning and Fuzzy Logic (Sohail Sarwar, Raul García-Castro, Zia Ul Qayyum, Muhammad Safyan and Rana Faisal Munir); (4) Learning Factory--Integrative E-Learning (Peter Steininger); (5) Intercultural Sensibility in Online Teaching and Learning Processes (Eulalia Torras and Andreu Bellot); (6) Mobile Learning on the Basis of the Cloud Services (Tatyana Makarchuk); (7) Personalization of Learning Activities within a Virtual Environment for Training Based on Fuzzy Logic Theory (Fahim Mohamed, Jakimi Abdeslam and El Bermi Lahcen); and (8) Promoting Best Practices in Teaching and Learning in Nigerian Universities through Effective E-Learning: Prospects and Challenges (Grace Ifeoma Obuekwe and Rose-Ann Ifeoma Eze). 
Reflection papers include the following: (1) A Conceptual Framework for Web-Based Learning Design (Hesham Alomyan); (2) The Key to Success in Electronic Learning: Faculty Training and Evaluation (Warren Matthews and Albert Smothers); (3) Using Games, Comic Strips, and Maps to Enhance Teacher Candidates' e-Learning Practice in The Social Studies (Nancy B. Sardone); (4) Scanner Based Assessment in Exams Organized with Personalized Thesis Randomly Generated via Microsoft Word (Romeo Teneqexhi, Margarita Qirko, Genci Sharko, Fatmir Vrapi and Loreta Kuneshka); (5) Designing a Web-Based Asynchronous Innovation/Entrepreneurism Course (Parviz Ghandforoush); and (6) Semantic Annotation of Resources to Learn with Connected Things (Aymeric Bouchereau and Ioan Roxin). Posters include: (1) Development of a Framework for MOOC in Continuous Training (Carolina Amado and Ana Pedro); and (2) Information Literacy in the 21st Century: Usefulness and Ease of Learning (Patricia Fidalgo and Joan Thormann). Also included is a Doctorial Consortium: E-Learning Research and Development: On Evaluation, Learning Performance, and Visual Attention (Marco Ruth). An author index is provided and individual papers include references.
- Published
- 2017
11. Proceedings of the International Association for Development of the Information Society (IADIS) International Conference on Cognition and Exploratory Learning in Digital Age (CELDA) (Madrid, Spain, October 19-21, 2012)
- Author
-
International Association for Development of the Information Society (IADIS)
- Abstract
The intention of the IADIS CELDA 2012 Conference was to address the main issues concerned with evolving learning processes and supporting pedagogies and applications in the digital age. There have been advances in both cognitive psychology and computing that have affected the educational arena. The convergence of these two disciplines is increasing at a fast pace and affecting academia and professional practice in many ways. Paradigms such as just-in-time learning, constructivism, student-centered learning and collaborative approaches have emerged and are being supported by technological advancements such as simulations, virtual reality and multi-agent systems. These developments have created both opportunities and areas of serious concern. This conference aimed to cover both technological and pedagogical issues related to these developments. The IADIS CELDA 2012 Conference received 98 submissions from more than 24 countries. Out of the papers submitted, 29 were accepted as full papers. In addition to the presentation of full papers, short papers and reflection papers, the conference also included a keynote presentation from internationally distinguished researchers. Individual papers contain figures, tables, and references.
- Published
- 2012
12. International evaluation of an AI system for breast cancer screening.
- Author
-
McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, Back T, Chesus M, Corrado GS, Darzi A, Etemadi M, Garcia-Vicente F, Gilbert FJ, Halling-Brown M, Hassabis D, Jansen S, Karthikesalingam A, Kelly CJ, King D, Ledsam JR, Melnick D, Mostofi H, Peng L, Reicher JJ, Romera-Paredes B, Sidebottom R, Suleyman M, Tse D, Young KC, De Fauw J, and Shetty S
- Subjects
- Female, Humans, Mammography standards, Reproducibility of Results, United Kingdom, United States, Artificial Intelligence standards, Breast Neoplasms diagnostic imaging, Early Detection of Cancer methods, Early Detection of Cancer standards
- Abstract
Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful [1]. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives [2]. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.
- Published
- 2020
- Full Text
- View/download PDF
13. News feature and call for papers.
- Author
-
D’Cruz, B. and Sitnikov, D.
- Subjects
- EXPERT systems, ARTIFICIAL intelligence, RESEARCH
- Abstract
Provides information on the state of intelligent systems research in Ukraine. Research groups listed by the National Academy of Sciences information center; Centers affiliated to the V.M. Glushkov Institute of Cybernetics; List of publications that universities and scientific institutes have throughout the country.
- Published
- 2002
- Full Text
- View/download PDF
14. The practical implementation of artificial intelligence technologies in medicine.
- Author
-
He J, Baxter SL, Xu J, Xu J, Zhou X, and Zhang K
- Subjects
- Algorithms, Humans, Reference Standards, Social Control, Formal, United States, Artificial Intelligence, Medicine
- Abstract
The development of artificial intelligence (AI)-based technologies in medicine is advancing rapidly, but real-world clinical implementation has not yet become a reality. Here we review some of the key practical issues surrounding the implementation of AI into existing clinical workflows, including data sharing and privacy, transparency of algorithms, data standardization, interoperability across multiple platforms, and concern for patient safety. We summarize the current regulatory environment in the United States and highlight comparisons with other regions of the world, notably Europe and China.
- Published
- 2019
- Full Text
- View/download PDF
15. Bibliometric analysis of ChatGPT in medicine.
- Author
-
Gande, Sharanya, Gould, Murdoc, and Ganti, Latha
- Subjects
- SERIAL publications, SAFETY, ARTIFICIAL intelligence, PRIVACY, PROFESSIONAL peer review, MISINFORMATION, NATURAL language processing, BIBLIOMETRICS, PUBLISHING, MEDICAL research, ENDOWMENT of research, MEDICINE, INTERPERSONAL relations, OPEN access publishing, MEDICAL practice, RELIABILITY (Personality trait), MEDICAL ethics, EVALUATION
- Abstract
Introduction: The emergence of artificial intelligence (AI) chat programs has opened two distinct paths, one enhancing interaction and another potentially replacing personal understanding. Ethical and legal concerns arise due to the rapid development of these programs. This paper investigates academic discussions on AI in medicine, analyzing the context, frequency, and reasons behind these conversations. Methods: The study collected data from the Web of Science database on articles containing the keyword "ChatGPT" published from January to September 2023, resulting in 786 medically related journal articles. The inclusion criteria were peer-reviewed articles in English related to medicine. Results: The United States led in publications (38.1%), followed by India (15.5%) and China (7.0%). Keywords such as "patient" (16.7%), "research" (12%), and "performance" (10.6%) were prevalent. The Cureus Journal of Medical Science (11.8%) had the most publications, followed by the Annals of Biomedical Engineering (8.3%). August 2023 had the highest number of publications (29.3%), with significant growth from February to March and from April to May. Medical General Internal (21.0%) was the most common category, followed by Surgery (15.4%) and Radiology (7.9%). Discussion: The prominence of India in ChatGPT research, despite lower research funding, indicates the platform's popularity and highlights the importance of monitoring its use for potential medical misinformation. China's interest in ChatGPT research suggests a focus on Natural Language Processing (NLP) AI applications, despite public bans on the platform. Cureus' success in publishing ChatGPT articles can be attributed to its open-access, rapid publication model. The study identifies research trends in plastic surgery, radiology, and obstetric gynecology, emphasizing the need for ethical considerations and reliability assessments in the application of ChatGPT in medical practice.
Conclusion: ChatGPT's presence in medical literature is growing rapidly across various specialties, but concerns related to safety, privacy, and accuracy persist. More research is needed to assess its suitability for patient care and implications for non-medical use. Skepticism and thorough review of research are essential, as current studies may face retraction as more information emerges.
- Published
- 2024
- Full Text
- View/download PDF
16. The role of artificial intelligence and fintech in promoting eco-friendly investments and non-greenwashing practices in the US market.
- Author
-
Si Mohammed K, Serret V, Ben Jabeur S, and Nobanee H
- Subjects
- Investments, United States, Conservation of Natural Resources methods, Technology, Artificial Intelligence
- Abstract
This study explores the intricate connections among financial technology (FinTech), artificial intelligence (AI), and eco-friendly markets in the US, shedding light on their dynamic interplay and implications for sustainable investment and policy strategies. Specifically, our research delves into the transformative roles of FinTech and AI in broadening financial access, fostering green financing initiatives, and aligning financial practices with environmentally conscious objectives. We also investigate market reactions among the AI, FinTech, non-greenwashing, and eco-friendly markets during exogenous shocks, offering valuable insights into these markets' interconnectedness. An innovative connectedness approach, the R² decomposed measures, is employed to capture the contemporaneous and lagged spillover effects using daily data from December 19, 2017, to November 1, 2023. We also focus on constructing a minimum connectedness portfolio using the time-varying parameter vector autoregressive approach. The findings reveal significant volatility connectivity within these intergroups, emphasizing the need for sustainable tech finance policies and real-time monitoring systems to address market fluctuations. Overall, this study contributes to an underexplored area by providing empirical evidence and valuable implications for scholars and policymakers, and can help in guiding sustainable investment and policy strategies aligned with zero-emissions agendas.
- Published
- 2024
- Full Text
- View/download PDF
17. 48 Capabilities of Highly Educated People
- Author
-
Greene, Richard Tabor
- Abstract
Purpose: To get beyond religious, philosophic, and political definitions of educatedness by going empirical. To redo Plato, in effect, by defining "the good" empirically. Background: This research was part of the Excellence Science (orthogonal disciplines) Research Project at the University of Chicago. That project redid Plato by defining "the good" empirically using artificial intelligence protocol analysis and total quality process modeling methods embedded in surveys and interview instruments. A sample of eminent people in 63 professions from 41 nations was asked who is top in their field and upon what capability basis they rose to the top, producing 54 routes to the top of nearly any field, one of which was educatedness. 150 people nominated as top (5+ from each of 63 diverse occupations) in their field due to educatedness were asked what that consisted of, in constituent capability terms. This paper reports a categorical model of their answers and compares it to a categorical model from philosophers of education. Method: Finding highly educated acting people via a double nomination process used by expert system programmers. Finding their capabilities via protocol analysis from artificial intelligence expert system building and process modeling from total quality programs embedded in questionnaire and interview instruments. Sample: 8000+ people, 150 in each of 54 distinct "excellence sciences" (educatedness, effectiveness, creativity, managing complexity, handling error, etc. for 54 routes to the top of nearly any field) from 41 nations (half resident in the USA, half visiting/studying there), were given questionnaires and interviews over a period of years. Analysis: Tens of thousands of answers, that is, individual capabilities, were categorized hierarchically, and the final hierarchy of categories was regularized fractally. The same approach was applied also to texts on educatedness by usual philosophers, politicos, and religious leaders for comparison purposes.
Results: 48 capabilities of highly educated people, 3 for each of 16 categories--one such model empirically derived and another such model, for comparison, derived from texts on educatedness. Recommendations: The philosopher text-derived model emphasizes liberation and de-mystification a great deal more than the empiric model. Also, the "virtues" of the empiric model are enormously different from the 18th century style virtues some modern philosophers and educators want us to return to. The model-build and model-apply basis of modern work--so involved in the Wall Street disaster of 2008-9--appears front and center as one fourth of "educatedness" capabilities. Use of the model to assess career success, education curriculum and institution effectiveness, and assessment of biases and limitations in policy communities designing education institutions and initiatives is suggested. Additional data: A book "Are You Educated? 64 Capabilities of Highly Educated People" was derived from this article later (available at scribd.com) and a book "Are You Educated? EU, China, USA, Japan? 300 Capabilities from 5 Models of Educatedness" was also derived later on (and is available at scribd.com).
- Published
- 2008
18. Global research trends and foci of artificial intelligence-based tumor pathology: a scientometric study.
- Author
-
Shen, Zefeng, Hu, Jintao, Wu, Haiyang, Chen, Zeshi, Wu, Weixia, Lin, Junyi, Xu, Zixin, Kong, Jianqiu, and Lin, Tianxin
- Subjects
- MASS media, BIBLIOMETRICS, ARTIFICIAL intelligence, COGNITION, RESEARCH funding, BREAST tumors
- Abstract
Background: With the development of digital pathology and the renewal of deep learning algorithms, artificial intelligence (AI) is widely applied in tumor pathology. Previous research has demonstrated that AI-based tumor pathology may help to solve the challenges faced by traditional pathology. This technology has attracted the attention of scholars in many fields, and a large number of articles have been published. This study mainly summarizes the knowledge structure of AI-based tumor pathology through bibliometric analysis, and discusses the potential research trends and foci. Methods: Publications related to AI-based tumor pathology from 1999 to 2021 were selected from the Web of Science Core Collection. VOSviewer and Citespace were mainly used to perform and visualize co-authorship, co-citation, and co-occurrence analysis of countries, institutions, authors, references and keywords in this field. Results: A total of 2753 papers were included. The number of papers on AI-based tumor pathology research has increased continuously since 1999. The United States made the largest contribution in this field, in terms of publications (1138, 41.34%), H-index (85) and total citations (35,539 times). We identified Harvard Medical School as the most productive institution and Madabhushi Anant as the most productive author, while Jemal Ahmedin was the most co-cited author. Scientific Reports was the most prominent journal and, after analysis, Lecture Notes in Computer Science was the journal with the highest total link strength. According to the results of the references and keywords analysis, "breast cancer histopathology", "convolutional neural network" and "histopathological image" were identified as the major future research foci. Conclusions: AI-based tumor pathology is in a stage of vigorous development and has a bright prospect. International transboundary cooperation among countries and institutions should be strengthened in the future.
It is foreseeable that more research foci will be lied in the interpretability of deep learning-based model and the development of multi-modal fusion model. [ABSTRACT FROM AUTHOR]- Published
- 2022
- Full Text
- View/download PDF
19. Information Technology R&D: Critical Trends and Issues.
- Author
-
Congress of the U.S., Washington, DC. Office of Technology Assessment.
- Abstract
This Office of Technology Assessment report on the current state of research and development in the telecommunications industry in the United States examines four specific areas of research as case studies: computer architecture, artificial intelligence, fiber optics, and software engineering. It discusses the structure and orientation of some selected foreign programs as they challenge traditional U.S. market leadership in some areas of computers and communications. Finally, it examines a set of issues that were raised in the course of the study: manpower, institutional change, the new research organizations that grew out of Bell Laboratories, and the implications of trends in overall science and technology policy. Following an introduction and summary of the report, individual chapters address the following topics: (1) the environment for research and development in information technology in the United States; (2) selected case studies in information technology research and development; (3) effects of deregulation and divestiture on research; (4) education and human resources for research and development; (5) new roles for universities in information technology research and development; (6) foreign information technology research and development; (7) information technology research and development in the context of U.S. science and technology policy; and (8) technology and industry. (JB)
- Published
- 1985
20. Ensemble Deep Learning-Based Image Classification for Breast Cancer Subtype and Invasiveness Diagnosis from Whole Slide Image Histopathology.
- Author
-
Balasubramanian, Aadhi Aadhavan, Al-Heejawi, Salah Mohammed Awad, Singh, Akarsh, Breggia, Anne, Ahmad, Bilal, Christman, Robert, Ryan, Stephen T., and Amal, Saeed
- Subjects
BREAST tumor diagnosis ,CANCER invasiveness ,TASK performance ,MEDICAL technology ,BIOINDICATORS ,BREAST tumors ,ARTIFICIAL intelligence ,MEDICAL care ,HOSPITALS ,CAUSES of death ,EVALUATION of medical care ,DESCRIPTIVE statistics ,DEEP learning ,COMPUTER-aided diagnosis ,ARTIFICIAL neural networks ,DIGITAL image processing ,ALGORITHMS ,CARCINOMA in situ - Abstract
Simple Summary: Breast cancer is a significant cause of female cancer-related deaths in the US. Checking how severe the cancer is helps in planning treatment. Modern AI methods are good at grading cancer, but they are not yet widely used in hospitals. We developed and utilized ensemble deep learning algorithms to address the tasks of classifying (1) breast cancer subtype and (2) breast cancer invasiveness from whole slide image (WSI) histopathology slides. The ensemble models were based on convolutional neural networks (CNNs), known for extracting the distinctive features crucial for accurate classification. In this paper, we provide a comprehensive analysis of these models and the methodology used for breast cancer diagnosis tasks. Cancer diagnosis and classification are pivotal for effective patient management and treatment planning. In this study, a comprehensive approach is presented utilizing ensemble deep learning techniques to analyze breast cancer histopathology images. We used two widely employed datasets from different centers for two different tasks: BACH and BreakHis. Within the BACH dataset, a proposed ensemble strategy was employed, incorporating VGG16 and ResNet50 architectures to achieve precise classification of breast cancer histopathology images. Introducing a novel image patching technique to preprocess the high-resolution images facilitated a focused analysis of localized regions of interest. The annotated BACH dataset encompassed 400 WSIs across four distinct classes: Normal, Benign, In Situ Carcinoma, and Invasive Carcinoma. In addition, the proposed ensemble was used on the BreakHis dataset, utilizing VGG16, ResNet34, and ResNet50 models to classify microscopic images into eight distinct categories (four benign and four malignant). For both datasets, a five-fold cross-validation approach was employed for rigorous training and testing.
Preliminary experimental results indicated a patch classification accuracy of 95.31% (for the BACH dataset) and WSI image classification accuracy of 98.43% (BreakHis). This research significantly contributes to ongoing endeavors in harnessing artificial intelligence to advance breast cancer diagnosis, potentially fostering improved patient outcomes and alleviating healthcare burdens. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
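The patch-plus-ensemble pipeline described in the preceding abstract can be sketched in miniature. This is an illustrative sketch under assumed interfaces, not the authors' code: `extract_patches` and `ensemble_predict` are hypothetical names, and the CNN backbones (VGG16, ResNet50) are stood in for by any callables that map a patch to class probabilities.

```python
# Hedged sketch (not the authors' code): patch-based ensemble classification.
# A high-resolution slide is split into non-overlapping patches; each model in
# the ensemble scores a patch, per-model probabilities are averaged, and the
# label is the class with the highest mean probability (soft voting).

def extract_patches(image, patch_size):
    """Split a 2-D grid (list of lists) into non-overlapping square patches."""
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows - patch_size + 1, patch_size):
        for c in range(0, cols - patch_size + 1, patch_size):
            patches.append([row[c:c + patch_size] for row in image[r:r + patch_size]])
    return patches

def ensemble_predict(models, patch, classes):
    """Average class probabilities over models (each model: patch -> dict of probs)."""
    avg = {cls: 0.0 for cls in classes}
    for model in models:
        probs = model(patch)
        for cls in classes:
            avg[cls] += probs[cls] / len(models)
    return max(avg, key=avg.get)
```

Averaging per-model probabilities before taking the argmax is the standard soft-voting form of ensembling; the paper's exact fusion rule may differ.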
21. Segmentation using large language models: A new typology of American neighborhoods.
- Author
-
Singleton, Alex D. and Spielman, Seth
- Subjects
LANGUAGE models ,ARTIFICIAL intelligence ,AMERICAN Community Survey ,NATURAL language processing ,IMAGE segmentation ,SMALL area statistics - Abstract
In the United States, recent changes to the National Statistical System have amplified the geographic-demographic resolution trade-off. That is, when working with demographic and economic data from the American Community Survey, as one zooms in geographically one loses resolution demographically due to very large margins of error. In this paper, we present a solution to this problem in the form of an AI-based, open, and reproducible geodemographic classification system for the United States using small area estimates from the American Community Survey (ACS). We apply a partitioning clustering algorithm to a range of socio-economic, demographic, and built environment variables. Our approach utilizes an open source software pipeline that ensures adaptability to future data updates. A key innovation is the integration of GPT4, a state-of-the-art large language model, to generate intuitive cluster descriptions and names. This represents a novel application of natural language processing in geodemographic research and showcases the potential for human-AI collaboration within the geospatial domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
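The "partitioning clustering algorithm" the preceding abstract applies to standardized ACS variables is typically k-means or a close relative. Below is a minimal, dependency-free k-means sketch for intuition only; it is not the authors' pipeline, and the function name and defaults are assumptions.

```python
import random

# Hedged sketch (not the authors' pipeline): minimal k-means partitioning of
# the kind used to build geodemographic clusters. Each area is a tuple of
# already-standardized variable values.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean distance)
        groups = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            groups[idx].append(p)
        # recompute each centroid as the mean of its group (keep it if empty)
        centroids = [
            tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups
```

In a geodemographic setting each resulting group is a neighborhood type; the abstract's innovation is then naming and describing those groups with a large language model.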
22. Promoting Hospitals’ Reputation through Smart Branding Initiatives. A Quantitative Analysis of the Best Hospitals in the United States.
- Author
-
Medina Aguerrebere, Pablo, Medina, Eva, and Pacanowski, Toni González
- Subjects
REPUTATION ,LITERATURE reviews ,CORPORATE websites ,HOSPITALS ,ARTIFICIAL intelligence ,QUANTITATIVE research - Abstract
Hospitals use different technological tools to implement corporate communication initiatives and, in this way, improve their relationships with stakeholders (employees, patients, media companies) and build a reputed brand. However, they face different barriers: limited budgets for corporate communication, strict legal frameworks, and stakeholders’ new needs regarding information and emotional support. This paper aims to analyze how the 100 best hospitals in the United States manage smart technologies to promote their brand. To that end, we conducted a literature review about smart hospitals, branding, and corporate communication, and then defined 34 quantitative indicators to evaluate how these hospitals managed their websites, online newsrooms, about us sections, and artificial intelligence department websites for reputation purposes. Our results showed that most hospitals respected indicators related to the homepage (8.67/11) but not those referring to online newsrooms (4.44/11) or about us sections (2.66/6). In addition, only 23 hospitals had implemented a department specialized in artificial intelligence that collaborated with external organizations. We concluded that most American hospitals focused their reputation efforts on patients rather than other targets (media companies, employees, suppliers, shareholders), and that these organizations did not integrate enough artificial intelligence projects into their smart branding initiatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Technology Commercialization Activation Model Using Imagification of Variables.
- Author
-
Kim, Youngho, Park, Sangsung, and Kang, Jiho
- Subjects
TECHNOLOGY transfer ,TECHNOLOGICAL innovations ,COMMERCIALIZATION ,DATA augmentation ,IMAGE analysis ,ARTIFICIAL intelligence - Abstract
Various institutions such as universities and corporations strive to commercialize technologies produced through R&D investment. The ideal way to commercialize technology is to transfer it, recognizing the value of the developed technology. Technology transfer is the transfer of technology from R&D entities, such as universities, research institutes, and companies, to others, with the advantage of spreading research results and maximizing cost efficiency. In other words, if enough technology is transferred, it can be commercialized. Although many institutions have various support measures to assist in transferring technology, these measures are no substitute for quantitative, objective methods. To solve this problem, this paper proposes a technology transfer prediction model based on the information found in patents. However, it is not realistic to include the information from all patents in a quantitative, objective method, so patterns related to technology transfer must be identified to select the appropriate patents for use in the predictive model. In addition, a method is needed to address the insufficient training data for the model. Training data are limited because some technology transfer information is not disclosed and little technology is transferred in new technology fields. The technology transfer prediction model proposed in this paper searches for hidden patterns related to technology transfer by imaging the patent information, which also allows image analysis models to be applied. Furthermore, augmenting the data can solve the problem of the lack of training data for technology transfer. To examine whether the proposed model can be used in real industries, we collected patents related to artificial intelligence technology registered in the United States and conducted experiments. The experimental results show that the models trained by imaging patent information performed excellently.
Moreover, it was shown that the data augmentation technique can be used when there are insufficient data for technology transfer. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. AI/ML assisted shale gas production performance evaluation.
- Author
-
Syed, Fahad I., Muther, Temoor, Dahaghi, Amirmasoud K., and Negahban, Shahin
- Subjects
SHALE gas ,OIL shales ,ARTIFICIAL intelligence ,SHALE gas reservoirs ,ARTIFICIAL neural networks - Abstract
Shale gas reservoirs play a major role in overall hydrocarbon production, especially in the United States, and given the intense development of such reservoirs, it is essential to learn productive methods for modeling production and evaluating performance. Consequently, one of the most widely adopted techniques for production performance analysis is the utilization of artificial intelligence (AI) and machine learning (ML). Hydrocarbon exploration and production is a continuous process that generates a large amount of data from the subsurface as well as from surface facilities. The availability of such a huge data set, which keeps increasing over time, enhances computational capabilities and performance accuracy through AI and ML applications using a data-driven approach. The ML approach can be utilized through supervised and unsupervised methods in addition to artificial neural networks (ANN). Other ML approaches include random forest (RF), support vector machine (SVM), boosting techniques, clustering methods, and artificial network-based architectures. In this paper, a systematic literature review is presented, focused on AI and ML applications for shale gas production performance evaluation and modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. The Coming of Age of AI/ML in Drug Discovery, Development, Clinical Testing, and Manufacturing: The FDA Perspectives.
- Author
-
Niazi SK
- Subjects
- United States, Humans, United States Food and Drug Administration, Drug Discovery, Precision Medicine, Artificial Intelligence, Machine Learning
- Abstract
Artificial intelligence (AI) and machine learning (ML) represent significant advancements in computing, building on technologies that humanity has developed over millions of years-from the abacus to quantum computers. These tools have reached a pivotal moment in their development. In 2021 alone, the U.S. Food and Drug Administration (FDA) received over 100 product registration submissions that heavily relied on AI/ML for applications such as monitoring and improving human performance in compiling dossiers. To ensure the safe and effective use of AI/ML in drug discovery and manufacturing, the FDA and numerous other U.S. federal agencies have issued continuously updated, stringent guidelines. Intriguingly, these guidelines are often generated or updated with the aid of AI/ML tools themselves. The overarching goal is to expedite drug discovery, enhance the safety profiles of existing drugs, introduce novel treatment modalities, and improve manufacturing compliance and robustness. Recent FDA publications offer an encouraging outlook on the potential of these tools, emphasizing the need for their careful deployment. This has expanded market opportunities for retraining personnel handling these technologies and enabled innovative applications in emerging therapies such as gene editing, CRISPR-Cas9, CAR-T cells, mRNA-based treatments, and personalized medicine. In summary, the maturation of AI/ML technologies is a testament to human ingenuity. Far from being autonomous entities, these are tools created by and for humans designed to solve complex problems now and in the future. This paper aims to present the status of these technologies, along with examples of their present and future applications., Competing Interests: The author reports no conflicts of interest in this work., (© 2023 Niazi.)
- Published
- 2023
- Full Text
- View/download PDF
26. FDA-approved machine learning algorithms in neuroradiology: A systematic review of the current evidence for approval.
- Author
-
Yearley AG, Goedmakers CMW, Panahi A, Doucette J, Rana A, Ranganathan K, and Smith TR
- Subjects
- United States, Humans, United States Food and Drug Administration, Machine Learning, Databases, Factual, Artificial Intelligence, Algorithms
- Abstract
Over the past decade, machine learning (ML) and artificial intelligence (AI) have become increasingly prevalent in the medical field. In the United States, the Food and Drug Administration (FDA) is responsible for regulating AI algorithms as "medical devices" to ensure patient safety. However, recent work has shown that the FDA approval process may be deficient. In this study, we evaluate the evidence supporting FDA-approved neuroalgorithms, the subset of machine learning algorithms with applications in the central nervous system (CNS), through a systematic review of the primary literature. Articles covering the 53 FDA-approved algorithms with applications in the CNS published in PubMed, EMBASE, Google Scholar and Scopus between database inception and January 25, 2022 were queried. Initial searches identified 1505 studies, of which 92 articles met the criteria for extraction and inclusion. Studies were identified for 26 of the 53 neuroalgorithms, of which 10 algorithms had only a single peer-reviewed publication. Performance metrics were available for 15 algorithms, external validation studies were available for 24 algorithms, and studies exploring the use of algorithms in clinical practice were available for 7 algorithms. Papers studying the clinical utility of these algorithms focused on three domains: workflow efficiency, cost savings, and clinical outcomes. Our analysis suggests that there is a meaningful gap between the FDA approval of machine learning algorithms and their clinical utilization. There appears to be room for process improvement by implementation of the following recommendations: the provision of compelling evidence that algorithms perform as intended, mandating minimum sample sizes, reporting of a predefined set of performance metrics for all algorithms and clinical application of algorithms prior to widespread use. 
This work will serve as a baseline for future research into the ideal regulatory framework for AI applications worldwide., Competing Interests: Declaration of competing interest All research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2023 Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
27. Towards the Politicization of Artificial Intelligence in the EU? External Influences and Internal Dynamics.
- Author
-
POSELIUZHNA, ILONA
- Subjects
ARTIFICIAL intelligence ,POLARIZATION (Economics) ,CIVIL rights - Abstract
Copyright of Yearbook of European Integration / Rocznik Integracji Europejskiej is the property of Faculty of Political Science & Journalism, Adam Mickiewicz University and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
28. 人工智能技术在军事情报领域的应用与发展 [Application and Development of Artificial Intelligence Technology in the Field of Military Intelligence].
- Author
-
赵亚平, 黄 毅, 李 虹, and 孟 杰
- Subjects
- *
MILITARY intelligence , *ARTIFICIAL intelligence , *MILITARY technology , *INTELLIGENCE service , *MILITARY service - Abstract
This paper analyzes and reviews the application and research status of artificial intelligence technology in the field of military intelligence, in order to provide a reference for subsequent military intelligence research. It summarizes the development and application of artificial intelligence in military intelligence work in terms of intelligence analysis and military command decision-making. Based on the intelligence workflow, the military intelligence service model under artificial intelligence technology is analyzed. The paper systematically reviews the research and development status of typical intelligent intelligence system projects in the United States, and analyzes the key development trends and technical difficulties of artificial intelligence in the military intelligence field. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A LITERATURE REVIEW AND BIBLIOMETRIC ANALYSIS OF MIND AND ARTIFICIAL CONSCIOUSNESS WORLDWIDE OVER THE YEAR 2000 - 2022.
- Author
-
Aziz, Muhammad Aslam Abdul
- Subjects
CONSCIOUS automata ,BIBLIOMETRICS ,ARTIFICIAL intelligence ,INDUSTRY 4.0 ,LITERATURE reviews ,COMPUTER science ,COUNTRIES - Abstract
In the 21st century, as part of the fourth industrial revolution, artificial intelligence (AI) is one of the most important and well-known technologies. AI has made it possible to execute human tasks without the need for humans. However, there is one important concern: consciousness. This is because consciousness is one of the most defining qualities distinguishing humans from AI. This study therefore presents a literature review and bibliometric analysis of artificial and mind consciousness research around the world from 2000 to 2022, in order to provide researchers and scholars with an overview of the results and trends in this field. A textual query on two databases, Scopus (289 papers) and Web of Science (303 papers), using the term "artificial consciousness" OR "mind consciousness" was performed on 10 June 2022, retrieving 509 scholarly papers from 2000 to 2022 related to artificial and mind consciousness studies for in-depth analysis. Bibliometric analyses were performed using RStudio software version 4.2.0 and biblioshiny for bibliometrix to visualize and analyze trends in artificial and mind consciousness research. The analysis examined annual scientific publication growth, the most productive authors, the most frequently used words, the most prominent journals, and which countries had the highest collaboration with other countries. According to the findings, there is significant inconsistency in global trends in annual scientific production, with the number of publications increasing and decreasing. Among all countries, the United States (USA) contributed the most publications in the field of artificial and mind consciousness research. The findings show that the most relevant authors are Kelley TM (Scopus) and Patel AD (WoS).
Moreover, the most relevant journals in artificial and mind consciousness studies are Procedia Computer Science (Scopus) and the Journal of Consciousness Studies (WoS). This study can help new researchers in this field by providing information on relevant publications and authors to consult when conducting research on this topic. Furthermore, it helps other researchers understand current trends in this area of study. As a result, the justification for this study is to provide the first bibliometric analysis and to fill research gaps in bibliometric studies of artificial consciousness and the mind by providing information in the form of a literature review, overview, and guidelines. [ABSTRACT FROM AUTHOR]
- Published
- 2022
30. Load Forecasting with Machine Learning and Deep Learning Methods.
- Author
-
Cordeiro-Costas, Moisés, Villanueva, Daniel, Eguía-Oller, Pablo, Martínez-Comesaña, Miguel, and Ramos, Sérgio
- Subjects
MACHINE learning ,DEEP learning ,PATTERN recognition systems ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,LOADERS (Machines) - Abstract
Characterizing the electric energy curve can improve the energy efficiency of existing buildings without any structural change and is the basis for controlling and optimizing building performance. Artificial Intelligence (AI) techniques show much potential due to their accuracy and malleability in the field of pattern recognition, and using these models it is possible to adjust the building services in real time. Thus, the objective of this paper is to determine the AI technique that best forecasts electrical loads. The suggested techniques are random forest (RF), support vector regression (SVR), extreme gradient boosting (XGBoost), multilayer perceptron (MLP), long short-term memory (LSTM), and temporal convolutional network (Conv-1D). The conducted research applies a methodology that considers the bias and variance of the models, enhancing the robustness of the most suitable AI techniques for modeling and forecasting the electricity consumption in buildings. These techniques are evaluated in a single-family dwelling located in the United States. The performance comparison is obtained by analyzing their bias and variance using a 10-fold cross-validation technique. By evaluating the models on different sets, i.e., the validation and test sets, their capacity to reproduce results and to forecast properly on future occasions is also assessed. The results show that the model with the least dispersion in both the validation and test sets is LSTM. It presents errors of −0.02% nMBE and 2.76% nRMSE in the validation set and −0.54% nMBE and 4.74% nRMSE in the test set. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
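The nMBE and nRMSE figures quoted in the preceding abstract are bias and root-mean-square error normalized by the mean observed load and expressed as percentages. A minimal sketch of that computation, assuming the common convention (the paper's exact normalization, e.g. degrees-of-freedom corrections, may differ):

```python
import math

# Hedged sketch: normalized mean bias error (nMBE) and normalized RMSE
# (nRMSE), both as percentages of the mean observed value. Sign convention
# here is (predicted - actual); the paper's convention may be reversed.

def nmbe(actual, predicted):
    n, mean = len(actual), sum(actual) / len(actual)
    return 100.0 * sum(p - a for a, p in zip(actual, predicted)) / (n * mean)

def nrmse(actual, predicted):
    n, mean = len(actual), sum(actual) / len(actual)
    return 100.0 * math.sqrt(sum((p - a) ** 2 for a, p in zip(actual, predicted)) / n) / mean
```

Reading the abstract's numbers through these definitions, an nMBE of −0.54% means the LSTM under-predicts the load by about half a percent of its mean on the test set.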
31. What quantifies good primary care in the United States? A review of algorithms and metrics using real-world data.
- Author
-
Wang, Yun, Zheng, Jianwei, Schneberk, Todd, Ke, Yu, Chan, Alexandre, Hu, Tao, Lam, Jerika, Gutierrez, Mary, Portillo, Ivan, Wu, Dan, Chang, Chih-Hung, Qu, Yang, Brown, Lawrence, and Nichol, Michael B.
- Subjects
MEDICAL quality control ,OCCUPATIONAL roles ,AUDITING ,DATA quality ,KEY performance indicators (Management) ,ARTIFICIAL intelligence ,PRIMARY health care ,CONTINUUM of care ,CANCER patients ,PRESUMPTIONS (Law) ,WORKFLOW ,CLINICAL medicine ,DATA security ,ELECTRONIC health records ,ALGORITHMS - Abstract
Primary care physicians (PCPs) play an indispensable role in providing comprehensive care and referring patients for specialty care and other medical services. As the COVID-19 outbreak disrupts patient access to care, understanding the quality of primary care is critical at this unprecedented moment to support patients with complex medical needs in the primary care setting and inform policymakers to redesign our primary care system. The traditional way of collecting information from patient surveys is time-consuming and costly, and novel data collection and analysis methods are needed. In this review paper, we describe the existing algorithms and metrics that use the real-world data to qualify and quantify primary care, including the identification of an individual's likely PCP (identification of plurality provider and major provider), assessment of process quality (for example, appropriate-care-model composite measures), and continuity and regularity of care index (including the interval index, variance index and relative variance index), and highlight the strength and limitation of real world data from electronic health records (EHRs) and claims data in determining the quality of PCP care. The EHR audits facilitate assessing the quality of the workflow process and clinical appropriateness of primary care practices. With extensive and diverse records, administrative claims data can provide reliable information as it assesses primary care quality through coded information from different providers or networks. The use of EHRs and administrative claims data may be a cost-effective analytic strategy for evaluating the quality of primary care. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Predicting Institution Outcomes for Inter Partes Review (IPR) Proceedings at the United States Patent Trial & Appeal Board by Deep Learning of Patent Owner Preliminary Response Briefs.
- Author
-
Sokhansanj, Bahrad A. and Rosen, Gail L.
- Subjects
DEEP learning ,ACTIONS & defenses (Law) ,CONVOLUTIONAL neural networks ,ARTIFICIAL intelligence ,PATENTS ,NATURAL language processing - Abstract
A key challenge for artificial intelligence in the legal field is to determine from the text of a party's litigation brief whether, and why, it will succeed or fail. This paper shows a proof-of-concept test case from the United States: predicting outcomes of post-grant inter partes review (IPR) proceedings for invalidating patents. The objectives are to compare decision-tree and deep learning methods, validate interpretability methods, and demonstrate outcome prediction based on party briefs. Specifically, this study compares and validates two distinct approaches: (1) representing documents with term frequency inverse document frequency (TF-IDF), training XGBoost gradient-boosted decision-tree models, and using SHAP for interpretation. (2) Deep learning of document text in context, using convolutional neural networks (CNN) with attention, and comparing LIME and attention visualization for interpretability. The methods are validated on the task of automatically determining case outcomes from unstructured written decision opinions, and then used to predict trial institution or denial based on the patent owner's preliminary response brief. The results show how interpretable deep learning architecture classifies successful/unsuccessful response briefs on temporally separated training and test sets. More accurate prediction remains challenging, likely due to the fact-specific, technical nature of patent cases and changes in applicable law and jurisprudence over time. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
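The first branch of the study above represents each brief as TF-IDF weights before gradient-boosted classification. A minimal, dependency-free sketch of the TF-IDF step (illustrative only: the authors presumably used a library implementation, and this uses the plain log(N/df) inverse document frequency):

```python
import math
from collections import Counter

# Hedged sketch: term frequency x inverse document frequency vectors.
# docs: list of token lists -> list of {term: weight} dicts. A term that
# appears in every document gets idf = log(N/N) = 0, i.e. no weight.

def tfidf(docs):
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors
```

These sparse weight vectors are what a gradient-boosted tree model (XGBoost in the study) consumes, and per-term SHAP values can then be read back against the same vocabulary.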
33. THE SEARCH FOR TIME-SERIES PREDICTABILITY-BASED ANOMALIES.
- Author
-
OSPINA-HOLGUÍN, Javier Humberto and PADILLA-OSPINA, Ana Milena
- Subjects
EVOLUTIONARY computation ,REINFORCEMENT learning ,DIFFERENTIAL evolution ,MARKET timing ,DETERMINISTIC algorithms - Abstract
This paper introduces a new algorithm for exploiting time-series predictability-based patterns to obtain an abnormal return, or alpha, with respect to a given benchmark asset pricing model. The algorithm proposes a deterministic daily market timing strategy that decides between being fully invested in a risky asset or in a risk-free asset, with the trading rule represented by a parametric perceptron. The optimal parameters are sought in-sample via differential evolution to directly maximize the alpha. Successively using two modern asset pricing models and two different portfolio weighting schemes, the algorithm was able to discover an undocumented anomaly in the United States stock market cross-section, both out-of-sample and using small transaction costs. The new algorithm represents a simple and flexible alternative to technical analysis and forecast-based trading rules, neither of which necessarily maximizes the alpha. This new algorithm was inspired by recent insights into representing reinforcement learning as evolutionary computation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
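The in-sample search the preceding abstract describes, differential evolution maximizing an objective over the parameters of a trading rule, follows the standard DE loop: mutate with scaled difference vectors, cross over, and keep the trial only if it scores better. A minimal sketch maximizing a toy objective in place of portfolio alpha (function name and hyperparameter defaults are assumptions, not the paper's settings):

```python
import random

# Hedged sketch: DE/rand/1 with binomial crossover and greedy selection,
# maximizing `objective` over a box-bounded parameter vector.

def differential_evolution(objective, dim, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            # mutation (a + F * (b - c)) with binomial crossover, clipped to bounds
            trial = [
                min(hi, max(lo, a[d] + f * (b[d] - c[d]))) if rng.random() < cr
                else pop[i][d]
                for d in range(dim)
            ]
            s = objective(trial)
            if s > scores[i]:  # greedy selection (maximization)
                pop[i], scores[i] = trial, s
    best = max(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]
```

In the paper's setting the parameter vector would be perceptron weights and the objective the in-sample alpha of the resulting market-timing strategy; the sketch leaves both as plug-in callables.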
34. Modelos algorítmicos y fact-checking automatizado. Revisión sistemática de la literatura [Algorithmic models and automated fact-checking: a systematic review of the literature].
- Author
-
García-Marín, David
- Subjects
LINGUISTIC models ,IMAGE analysis ,KRUSKAL-Wallis Test ,ARTIFICIAL intelligence ,SPANISH literature - Abstract
Copyright of Documentación de las Ciencias de la Información is the property of Universidad Complutense de Madrid and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
35. International comparison of cross-disciplinary integration in industry 4.0: A co-authorship analysis using academic literature databases.
- Author
-
Mizukami Y and Nakano J
- Subjects
- Asia, Europe, North America, United States, Artificial Intelligence, Authorship
- Abstract
In innovation strategy, a type of Schumpeterian competitive strategy in business administration, "intra-individual diversity" has attracted attention as one factor for creating innovation. In this study, we redefine the "framework for identifying researchers' areas of expertise" as "a framework for quantifying intra-individual diversity among researchers." Note that diversity here refers to authorship of articles in multiple research fields. The application of this framework then made it possible to visualize organizational diversity by accumulating the intra-individual diversity of researchers and to discuss the innovation strategy of the organization. The analysis in this study discusses how countries are promoting research on the topics of artificial intelligence (AI), big data, and Internet of Things (IoT) technologies, which are at the core of Industry 4.0, from an innovation perspective. Note that Industry 4.0 is a technological framework that aims to "improve the efficiency of all social systems," "create new industries," and "increase intellectual productivity." For the analysis, we used 19 years of bibliographic data (2000-2018) from the top 20 countries in terms of the number of papers in AI, big data, and IoT technologies. As a result, this study classified the styles of cross-disciplinary fusion into four patterns in AI and three patterns in big data. This study did not consider the results for IoT because the differences between countries were small. Furthermore, regional differences in the style of cross-disciplinary fusion were also observed, and the global innovation patterns in Industry 4.0 were classified into seven categories. In Europe and North America, the cross-disciplinary integration style was similar among the United States, Germany, the Netherlands, Spain, England, Italy, Canada, and France.
In Asia, the cross-disciplinary fusion style was similar among China, Japan, and South Korea., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2022
- Full Text
- View/download PDF
36. Bibliometric analyses of applications of artificial intelligence on tuberculosis.
- Author
-
Cabanillas-Lazo M, Quispe-Vicuña C, Pascual-Guevara M, Barja-Ore J, Guerrero ME, Munive-Degregori A, and Mayta-Tovalino F
- Subjects
- Humans, United States, Bibliometrics, India, Artificial Intelligence, Tuberculosis
- Abstract
Background: Tuberculosis is one of the leading causes of death worldwide, affecting mainly low- and middle-income countries. Therefore, the objective was to analyze the bibliometric characteristics of the application of artificial intelligence (AI) in tuberculosis in Scopus., Methods: In this bibliometric study, the Scopus database was searched using a strategy composed of controlled and free terms regarding tuberculosis and AI. The search fields "TITLE," "ABSTRACT," and "AUTHKEY" were used to find the terms. The collected data were analyzed with SciVal software. Bibliometric data were described through figures and tables summarized by absolute values and percentages., Results: One thousand and forty-one documents were collected and analyzed. Yudong Zhang was the author with the highest scientific production; however, K. C. Santosh had the greatest impact. Anna University (India) was the institution with the highest number of published papers. Most papers were published in the first quartile. The United States led the scientific production. Articles with international collaboration had the highest impact., Conclusion: Articles related to tuberculosis and AI are mostly published in first-quartile journals, which reflects worldwide need and interest. Although countries with a high incidence of new cases of tuberculosis are among the most productive, those with the highest reported drug resistance need greater support and collaboration., Competing Interests: None
- Published
- 2022
- Full Text
- View/download PDF
37. Characteristics, Impact, and Visibility of Scientific Publications on Artificial Intelligence in Dentistry: A Scientometric Analysis.
- Author
-
Velasquez R, Barja-Ore J, Salazar-Salvatierra E, Gutiérrez-Ilave M, Mauricio-Vilchez C, Mendoza R, and Mayta-Tovalino F
- Subjects
- Cross-Sectional Studies, Dentistry, United States, Artificial Intelligence, Bibliometrics
- Abstract
Aim: To analyze the bibliometric characteristics, impact, and visibility of scientific publications on artificial intelligence (AI) in dentistry in Scopus., Materials and Methods: Descriptive and cross-sectional bibliometric study, based on a systematic search of information in Scopus between 2017 and July 10, 2022. The search strategy was elaborated with Medical Subject Headings (MeSH) and Boolean operators. The analysis of bibliometric indicators was performed with Elsevier's SciVal program., Results: From 2017 to 2022, the number of publications in indexed scientific journals increased, especially in the Q1 (56.1%) and Q2 (30.6%) quartiles. Among the journals with the highest production, the majority were from the United States and the United Kingdom, and the Journal of Dental Research had the highest impact (14.9 citations per publication) and the most publications (31). In addition, the Charité - Universitätsmedizin Berlin (FWCI: 8.24) and Krois Joachim (FWCI: 10.09) from Germany were the institution and author with the highest expected performance relative to the world average, respectively. The United States was the country with the highest number of published papers., Clinical Significance: Scientific production on artificial intelligence in the field of dentistry is increasing, with a preference for publication in prestigious, high-impact scientific journals. Most of the productive authors and institutions were from Japan. There is a need to promote and consolidate strategies to develop collaborative research both nationally and internationally.
- Published
- 2022
- Full Text
- View/download PDF
38. Discouraging the Demand That Fosters Sex Trafficking: Collaboration through Augmented Intelligence.
- Author
-
Van der Watt, Marcel
- Subjects
SEX trafficking ,HUMAN trafficking ,NATURAL language processing ,SEX crimes ,CRIMINAL justice system ,ARTIFICIAL intelligence - Abstract
Augmented intelligence, the fusion of human and artificial intelligence, is effectively being employed in response to a spectrum of risks and crimes that stem from the online sexual exploitation marketplace. As part of a study sponsored by the National Institute of Justice, the National Center on Sexual Exploitation has documented 15 tactics that have been used in more than 2650 US cities and counties to deter sex buyers from engaging with prostitution and sex trafficking systems. One of these tactics, technology-based enforcement and deterrence methods, has been used in more than 78 locations in the United States. This paper explores the issue of technology-facilitated trafficking in the online sexual exploitation marketplace and juxtaposes this with the use of augmented intelligence in collaborative responses to these crimes. Illustrative case studies describe how two organizations employ technology that utilizes the complementary strengths of humans and machines to deter sex buyers at the point of purchase. The human(e) touch of these organizations, combined with artificial intelligence, natural language processing, constructed websites, photos, and mobile technology, shows significant potential for operational scaling and provides a template for consideration by law enforcement agencies, criminal justice systems, and the larger multidisciplinary counter-trafficking community for collaborative replication in other settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. PROFESSIONAL ACTIVITIES.
- Subjects
COMPUTER science conferences ,CONFERENCES & conventions ,INFORMATION technology ,ARTIFICIAL intelligence ,MEETINGS - Abstract
The article presents information on conferences and symposiums related to the field of computer science. Artificial intelligence is the topic of the First Annual Computer Applications Conference to be held from October 31, 1985 to November 1, 1985 in New York. Technical topics to be covered at the Society of Women Engineers 1986 National Convention include: computers (hardware/software); telecommunications; genetic engineering; space; energy; robotics; defense; artificial intelligence; engineering education; manufacturing; productivity; transportation; hazardous waste; semiconductors; and agricultural engineering. The Convention is scheduled to be held from June 25, 1986 to June 29, 1986 in Connecticut. The Conference on Database Directions: Information Resource Management (Making It Work) is scheduled to be held from October 21, 1985 to October 23, 1985, in Fort Lauderdale, Florida. The Conference on Computers and Education is scheduled to be held on October 26, 1985 in Baltimore, Maryland. The meeting is to be held in cooperation with Towson State University, Maryland.
- Published
- 1985
40. RESEARCH TRENDS ON ARCHIVISTS IN SCOPUS-INDEXED JOURNALS.
- Author
-
Aulianto, Dwi Ridho, Riyadi, Slamet, Sinaga, Melinda, Komalasari, Euis, Hermansyah, Dendang, and Maulintuti, Maya
- Subjects
ARCHIVISTS ,CITATION indexes ,BIBLIOMETRICS ,ARTIFICIAL intelligence ,DATABASES ,ARCHIVES ,ACQUISITION of data ,ELECTRONIC data processing - Abstract
This research aims to determine research trends regarding archivists in Scopus-indexed journals. The research method used is bibliometric analysis, with data collected from the Scopus database on September 8, 2023, using the keyword "Archivist" for the years 2013 to 2022, restricted to final publications of the journal type. Data were processed and analyzed using Publish or Perish (PoP) and VOSviewer to display visualization results. The results show that 1241 documents regarding archivists were published in the last ten years. American Archivist was the most dominant publication source, publishing 94 documents. Poole was the most prolific writer, and the United States was the most contributing country, publishing 439 documents. Articles were the largest document type, with 1024 documents. Social Science is the subject area most often discussed in archivist topics. The total number of publication citations regarding archivists is 4528. Publication trends based on the appearance of a minimum of ten keywords are divided into five clusters, with the most dominant keywords being "metadata treatment, archives, human, digitization, and librarians." The publication trend seen from the latest publication year discusses "artificial intelligence," which has five related links: artificial intelligence-copyright, artificial intelligence-photography, artificial intelligence-automation, artificial intelligence-digitization, and artificial intelligence-metadata. [ABSTRACT FROM AUTHOR]
- Published
- 2023
41. Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending.
- Author
-
Kelley, Stephanie, Ovchinnikov, Anton, Hardoon, David R., and Heinrich, Adrienne
- Subjects
SEX discrimination ,ARTIFICIAL intelligence ,ANTI-discrimination laws ,MACHINE learning ,LOANS ,MORTGAGE loans ,FINANCIAL technology - Abstract
Problem definition: We use a realistically large, publicly available data set from a global fintech lender to simulate the impact of different antidiscrimination laws and their corresponding data management and model-building regimes on gender-based discrimination in the nonmortgage fintech lending setting. Academic/practical relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations faced by fintech lenders in practice, where advanced machine learning (ML) techniques are used with high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination across different antidiscrimination regimes whilst managing profitability. Methodology: We train statistical and ML models on a large and realistically rich publicly available data set to simulate different antidiscrimination regimes and measure their impact on model quality and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination. Results: We find that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and slightly decrease firm profitability. We observe that ML models are less discriminatory, of better predictive quality, and more profitable compared with traditional statistical models like logistic regression. Unlike omitted variable bias—which drives discrimination in statistical models—ML discrimination is driven by changes in the model training procedure, including feature engineering and feature selection, when gender is excluded. We observe that down sampling the training data to rebalance gender, gender-aware hyperparameter selection, and up sampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability. 
Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination with negligible impact on predictive quality and a slight increase in firm profitability. Managerial implications: A rethink is required of the antidiscrimination laws, specifically with respect to the collection and use of protected attributes for ML models. Firms should be able to collect protected attributes to, at minimum, measure discrimination and ideally, take steps to reduce it. Increased data access should come with greater accountability for firms. History: This paper has been accepted for the Manufacturing & Service Operations Management Special Section on Responsible Research in Operations Management. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.1108. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
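The training-data rebalancing regimes discussed in the record above can be illustrated with a minimal, self-contained sketch of downsampling to equalize group sizes. The function name `downsample_to_balance` and the toy applicant rows are illustrative assumptions, not the authors' code or data:

```python
import random

def downsample_to_balance(rows, group_key, seed=0):
    """Randomly drop rows from larger groups until all groups are equal size.

    rows: list of dicts; group_key: the protected attribute to balance on
    (e.g. 'gender'). A simplified stand-in for the downsampling regime
    described in the abstract above.
    """
    rng = random.Random(seed)
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    n = min(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))  # keep n random rows per group
    rng.shuffle(balanced)
    return balanced

# Hypothetical imbalanced applicant pool: 2 female rows, 6 male rows.
applicants = ([{"gender": "F", "default": 0}] * 2
              + [{"gender": "M", "default": 1}] * 6)
balanced = downsample_to_balance(applicants, "gender")
# Each group now contributes 2 rows, so the model trains on balanced data.
```

In practice such rebalancing trades some training data for reduced group imbalance, which is the trade-off against predictive quality the abstract measures.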
42. A scholarly network of AI research with an information science focus: Global North and Global South perspectives.
- Author
-
Tang KY, Hsiao CH, and Hwang GJ
- Subjects
- Australia, China, United Kingdom, United States, Artificial Intelligence, Technology
- Abstract
This paper primarily aims to provide a citation-based method for exploring the scholarly network of artificial intelligence (AI)-related research in the information science (IS) domain, especially from Global North (GN) and Global South (GS) perspectives. Three research objectives were addressed, namely (1) the publication patterns in the field, (2) the most influential articles and researched keywords in the field, and (3) the visualization of the scholarly network between GN and GS researchers between the years 2010 and 2020. On the basis of the PRISMA statement, longitudinal research data were retrieved from the Web of Science and analyzed. Thirty-two AI-related keywords were used to retrieve relevant quality articles. Finally, 149 articles accompanying the follow-up 8838 citing articles were identified as eligible sources. A co-citation network analysis was adopted to scientifically visualize the intellectual structure of AI research in GN and GS networks. The results revealed that the United States, Australia, and the United Kingdom are the most productive GN countries; by contrast, China and India are the most productive GS countries. Next, the 10 most frequently co-cited AI research articles in the IS domain were identified. Third, the scholarly networks of AI research in the GN and GS areas were visualized. Between 2010 and 2015, GN researchers in the IS domain focused on applied research involving intelligent systems (e.g., decision support systems); between 2016 and 2020, GS researchers focused on big data applications (e.g., geospatial big data research). Both GN and GS researchers focused on technology adoption research (e.g., AI-related products and services) throughout the investigated period. Overall, this paper reveals the intellectual structure of the scholarly network on AI research and several applications in the IS literature. 
The findings provide research-based evidence for expanding global AI research., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2022
- Full Text
- View/download PDF
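The co-citation network analysis named in the record above can be sketched in a few lines: two references are co-cited once for every article whose reference list contains both, and the resulting pair counts form the edge weights of the network. This is a minimal illustration of the general technique, not the authors' pipeline; the reference IDs are made up:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of references is cited together.

    reference_lists: one list of reference IDs per citing article.
    Returns a Counter keyed by frozenset({ref_a, ref_b}).
    """
    counts = Counter()
    for refs in reference_lists:
        # Each unordered pair in one article's references is one co-citation.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[frozenset((a, b))] += 1
    return counts

citing = [
    ["A", "B", "C"],  # article 1 cites references A, B, C
    ["A", "B"],       # article 2 cites A and B
    ["B", "C"],       # article 3 cites B and C
]
counts = cocitation_counts(citing)
# A-B are co-cited by two articles; A-C by one.
```

Thresholding these counts and clustering the resulting weighted graph is what visualization tools then do to reveal the intellectual structure.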
43. Artificial intelligence in clinical and translational science: Successes, challenges and opportunities.
- Author
-
Bernstam EV, Shireman PK, Meric-Bernstam F, N Zozus M, Jiang X, Brimhall BB, Windham AK, Schmidt S, Visweswaran S, Ye Y, Goodrum H, Ling Y, Barapatre S, and Becich MJ
- Subjects
- Humans, Translational Research, Biomedical, United States, Artificial Intelligence, Translational Science, Biomedical
- Abstract
Artificial intelligence (AI) is transforming many domains, including finance, agriculture, defense, and biomedicine. In this paper, we focus on the role of AI in clinical and translational research (CTR), including preclinical research (T1), clinical research (T2), clinical implementation (T3), and public (or population) health (T4). Given the rapid evolution of AI in CTR, we present three complementary perspectives: (1) scoping literature review, (2) survey, and (3) analysis of federally funded projects. For each CTR phase, we addressed challenges, successes, failures, and opportunities for AI. We surveyed Clinical and Translational Science Award (CTSA) hubs regarding AI projects at their institutions. Nineteen of 63 CTSA hubs (30%) responded to the survey. The most common funding source (48.5%) was the federal government. The most common translational phase was T2 (clinical research, 40.2%). Clinicians were the intended users in 44.6% of projects and researchers in 32.3% of projects. The most common computational approaches were supervised machine learning (38.6%) and deep learning (34.2%). The number of projects steadily increased from 2012 to 2020. Finally, we analyzed 2604 AI projects at CTSA hubs using the National Institutes of Health Research Portfolio Online Reporting Tools (RePORTER) database for 2011-2019. We mapped available abstracts to medical subject headings and found that nervous system (16.3%) and mental disorders (16.2%) were the most common topics addressed. From a computational perspective, big data (32.3%) and deep learning (30.0%) were most common. This work represents a snapshot in time of the role of AI in the CTSA program., (© 2021 The Authors. Clinical and Translational Science published by Wiley Periodicals LLC on behalf of American Society for Clinical Pharmacology and Therapeutics.)
- Published
- 2022
- Full Text
- View/download PDF
44. Methodology for Conducting Post-Marketing Surveillance of Software as a Medical Device Based on Artificial Intelligence Technologies.
- Author
-
Zinchenko VV, Arzamasov KM, Chetverikov SF, Maltsev AV, Novik VP, Akhmad ES, Sharova DE, Andreychenko AE, Vladzymyrskyy AV, and Morozov SP
- Subjects
- United States, Algorithms, Product Surveillance, Postmarketing, Artificial Intelligence, Software
- Abstract
The aim of the study was to develop a methodology for conducting post-registration clinical monitoring of software as a medical device based on artificial intelligence technologies (SaMD-AI)., Materials and Methods: The methodology of post-registration clinical monitoring is based on the requirements of regulatory legal acts issued by the Board of the Eurasian Economic Commission. To comply with these requirements, the monitoring involves submission of the review of adverse events reports, the review of developers' routine reports on the safety and efficiency of SaMD-AI, and the assessment of the system for collecting and analyzing developers' post-registration data on the safety and efficiency of medical devices. The methodology was developed with regard to the recommendations of the International Medical Device Regulators Forum and the documents issued by the Food and Drug Administration (USA). Field-testing of this methodology was carried out using SaMD-AI designed for diagnostic imaging., Results: The post-registration monitoring of SaMD-AI consists of three key stages: collecting user feedback, technical monitoring and clinical validation. Technical monitoring involves routine evaluation of SaMD-AI output data quality to detect and remove flaws in a timely manner, and to secure the product stability. Major outcomes include an ordered list of technical flaws in SaMD-AI and their classification using evidence from diagnostic imaging studies. The application of this methodology resulted in a gradual reduction in the number of studies with flaws due to timely improvements in artificial intelligence algorithms: the number of flaws decreased to 5% in various aspects during subsequent testing. Clinical validation confirmed that SaMD-AI is capable of producing clinically meaningful outputs related to its intended use within the functionality determined by the developer. 
The testing procedure and the baseline testing framework were established during the field testing., Conclusion: The developed methodology will ensure the safety and efficiency of SaMD-AI taking into account its specifics as intangible medical devices. The methodology presented in this paper can be used by SaMD-AI developers to plan and carry out the post-registration clinical monitoring., Competing Interests: The authors declare no conflicts of interest.
- Published
- 2022
- Full Text
- View/download PDF
45. Social Network and Bibliometric Analysis of Unmanned Aerial Vehicle Remote Sensing Applications from 2010 to 2021.
- Author
-
Wang, Jingrui, Wang, Shuqing, Zou, Dongxiao, Chen, Huimin, Zhong, Run, Li, Hanliang, Zhou, Wei, and Yan, Kai
- Subjects
DRONE aircraft ,REMOTE sensing ,SOCIAL network analysis ,ARTIFICIAL intelligence ,CHINA-United States relations - Abstract
Unmanned aerial vehicle (UAV) remote sensing (RS) has unique advantages over traditional satellite RS, including convenience, high resolution, affordability and fast acquisition speed, making it widely used in many fields. To provide an overview of the development of UAV RS applications during the past decade, we screened related publications from the Web of Science core database from 2010 to 2021, built co-author networks, a discipline interaction network, a keywords timeline view, and a co-citation cluster, and detected burst citations using bibliometrics and social network analysis. Our results show that: (1) The number of UAV RS publications had an increasing trend, with explosive growth in the past five years; the number of papers published by China and the United States (US) is far ahead in this field. (2) The US currently has the greatest influence in this field through the largest number of international cooperations; cooperation is mainly concentrated in countries and institutions with a large number of publications but is not widely distributed. (3) The application of UAV RS involves multiple interdisciplinary subjects, among which "Environmental Science and Ecology" ranks first. (4) Future research trends of UAV RS are expected to be related to artificial intelligence (e.g., artificial neural networks-based research). This paper provides a scientific basis and guidance for future developments of UAV RS applications, which can help the research community to better grasp the developments of this field. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
46. An interpretable machine learning model of cross-sectional U.S. county-level obesity prevalence using explainable artificial intelligence.
- Author
-
Allen, Ben
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,HEALTH behavior ,OBESITY ,SEDENTARY behavior - Abstract
Background: There is considerable geographic heterogeneity in obesity prevalence across counties in the United States. Machine learning algorithms accurately predict geographic variation in obesity prevalence, but the models are often uninterpretable and viewed as black boxes. Objective: The goal of this study is to extract knowledge from machine learning models of county-level variation in obesity prevalence. Methods: This study shows the application of explainable artificial intelligence methods to machine learning models of cross-sectional obesity prevalence data collected from 3,142 counties in the United States. County-level features were drawn from 7 broad categories: health outcomes, health behaviors, clinical care, social and economic factors, physical environment, demographics, and severe housing conditions. Explainable methods applied to random forest prediction models include feature importance, accumulated local effects, a global surrogate decision tree, and local interpretable model-agnostic explanations. Results: The results show that machine learning models explained 79% of the variance in obesity prevalence, with physical inactivity, diabetes, and smoking prevalence being the most important factors in predicting obesity prevalence. Conclusions: Interpretable machine learning models of health behaviors and outcomes provide substantial insight into obesity prevalence variation across counties in the United States. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
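The feature-importance idea in the record above can be illustrated with a permutation-importance sketch: shuffle one feature column and measure how much the model's error grows. This is a common stand-in for such analyses, not the study's actual pipeline; the toy "model" and feature values are made up for illustration:

```python
import random

def permutation_importance(model, X, y, n_repeats=30, seed=0):
    """Mean increase in MSE when one feature column is shuffled.

    model: callable taking a feature row and returning a prediction.
    X: list of feature rows (lists); y: true targets.
    Larger increases mean the model relies more on that feature.
    """
    rng = random.Random(seed)

    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-target association
            shuffled = [row[:j] + [v] + row[j + 1:]
                        for row, v in zip(X, col)]
            total += mse(shuffled) - base
        importances.append(total / n_repeats)
    return importances

# Toy model: outcome depends only on feature 0 (say, inactivity rate);
# feature 1 is irrelevant noise.
model = lambda row: 2.0 * row[0]
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 4.0], [4.0, 2.0]]
y = [model(r) for r in X]
imp = permutation_importance(model, X, y)
# imp[0] is positive (shuffling feature 0 hurts); imp[1] is 0.
```

The same shuffle-and-score logic scales to any black-box predictor, which is why it is a standard explainability baseline.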
47. Primary Prevention Trial Designs Using Coronary Imaging: A National Heart, Lung, and Blood Institute Workshop.
- Author
-
Greenland P, Michos ED, Redmond N, Fine LJ, Alexander KP, Ambrosius WT, Bibbins-Domingo K, Blaha MJ, Blankstein R, Fortmann SP, Khera A, Lloyd-Jones DM, Maron DJ, Min JK, Muhlestein JB, Nasir K, Sterling MR, and Thanassoulis G
- Subjects
- Aged, Humans, Maryland, Predictive Value of Tests, Primary Prevention, United States, Artificial Intelligence, National Heart, Lung, and Blood Institute (U.S.)
- Abstract
Coronary artery calcium (CAC) is considered a useful test for enhancing risk assessment in the primary prevention setting. Clinical trials are under consideration. The National Heart, Lung, and Blood Institute convened a multidisciplinary working group on August 26 to 27, 2019, in Bethesda, Maryland, to review available evidence and consider the appropriateness of conducting further research on coronary artery calcium (CAC) testing, or other coronary imaging studies, as a way of informing decisions for primary preventive treatments for cardiovascular disease. The working group concluded that additional evidence to support current guideline recommendations for use of CAC in middle-age adults is very likely to come from currently ongoing trials in that age group, and a new trial is not likely to be timely or cost effective. The current trials will not, however, address the role of CAC testing in younger adults or older adults, who are also not addressed in existing guidelines, nor will existing trials address the potential benefit of an opportunistic screening strategy made feasible by the application of artificial intelligence. Innovative trial designs for testing the value of CAC across the lifespan were strongly considered and represent important opportunities for additional research, particularly those that leverage existing trials or other real-world data streams including clinical computed tomography scans. Sex and racial/ethnic disparities in cardiovascular disease morbidity and mortality, and inclusion of diverse participants in future CAC trials, particularly those based in the United States, would enhance the potential impact of these studies., Competing Interests: Funding Support And Author Disclosures Dr. Bibbins-Domingo was a member and chair of the U.S. Preventive Services Task Force from 2010 to 2017. Dr. 
Blaha has received funding from the Food and Drug Administration, National Heart, Lung, and Blood Institute, Aetna Foundation, and Amgen Foundation; and has served on the advisory board (honoraria) for Amgen, Sanofi, Regeneron, Novartis, Novo Nordisk, Bayer, and Akcea (all significant except Akcea, modest). Dr. Blankstein has received funding from Amgen Inc. and Astellas Inc.; has served as President of the Society of Cardiovascular Computed Tomography; and is on Board of Directors of the American Society of Preventive Cardiology. Dr. Min has equity in Cleerly; and has served on advisory boards for Arineta and GE Healthcare. Dr. Nasir is supported by the Jerold B. Katz Academy of Translational Research. Dr. Sterling has received funding from the National Heart, Lung, and Blood Institute. Dr. Thanassoulis has received funding from Fonds de Recherche Québec—Santé, Doggone Foundation; has received honoraria/personal fees (advisory boards/speaker fees) from Amgen, Sanofi/Regeneron Pharmaceuticals, HLS Therapeutics, and Boehringer Ingelheim; and has received grants from Ionis Pharmaceuticals and Servier Laboratories outside of the submitted work. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose. The views expressed in this paper are those of the authors and do not represent the official position of the National Institutes of Health, the National Heart, Lung, and Blood Institute, or the U.S. Government. This paper reflects the proceedings of this National Heart, Lung, and Blood Institute workshop and does not represent an official position of the U.S. Preventive Services Task Force., (Copyright © 2021 American College of Cardiology Foundation. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
48. Designing COVID-19 mortality predictions to advance clinical outcomes: Evidence from the Department of Veterans Affairs.
- Author
-
Makridis CA, Strebel T, Marconi V, and Alterovitz G
- Subjects
- Data Display, Humans, Risk Factors, United States, United States Department of Veterans Affairs, Artificial Intelligence, COVID-19 mortality, Models, Statistical, Veterans
- Abstract
Using administrative data on all Veterans who enter Department of Veterans Affairs (VA) medical centres throughout the USA, this paper uses artificial intelligence (AI) to predict mortality rates for patients with COVID-19 between March and August 2020. First, using comprehensive data on over 10 000 Veterans' medical history, demographics and lab results, we estimate five AI models. Our XGBoost model performs the best, producing an area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve of 0.87 and 0.41, respectively. We show how focusing on the performance of the AUROC alone can lead to unreliable models. Second, through a unique collaboration with the Washington D.C. VA medical centre, we develop a dashboard that incorporates these risk factors and the contributing sources of risk, which we deploy across local VA medical centres throughout the country. Our results provide a concrete example of how AI recommendations can be made explainable and practical for clinicians and their interactions with patients., Competing Interests: Competing interests: None declared., (© Author(s) (or their employer(s)) 2021. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.)
- Published
- 2021
- Full Text
- View/download PDF
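The AUROC reported in the record above has a simple probabilistic reading: it is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A minimal sketch of that definition (illustrative scores, not the study's model outputs):

```python
def auroc(scores, labels):
    """AUROC as the probability that a positive outranks a negative.

    scores: model risk scores; labels: 1 for the positive class
    (e.g. death), 0 otherwise. Ties count as half a concordant pair.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation scores 1.0; a useless scorer hovers around 0.5.
print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```

This ranking view also makes the abstract's caveat concrete: AUROC ignores class prevalence, which is why the authors pair it with the precision-recall curve.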
49. Estimation of COVID-19 Epidemiology Curve of the United States Using Genetic Programming Algorithm.
- Author
-
Anđelić N, Baressi Šegota S, Lorencin I, Jurilj Z, Šušteršič T, Blagojević A, Protić A, Ćabov T, Filipović N, and Car Z
- Subjects
- Humans, United States epidemiology, Algorithms, Artificial Intelligence, COVID-19 epidemiology, Pandemics
- Abstract
Estimation of the epidemiology curve for the COVID-19 pandemic can be a very computationally challenging task. Thus far, there have been some implementations of artificial intelligence (AI) methods applied to develop epidemiology curves for specific countries. However, most applied AI methods generated models that are almost impossible to translate into a mathematical equation. In this paper, the AI method of genetic programming (GP) is utilized to develop a symbolic expression (mathematical equation) which can be used for the estimation of the epidemiology curve for the entire U.S. with high accuracy. The GP algorithm is utilized on the publicly available dataset that contains the number of confirmed, deceased and recovered patients for each U.S. state to obtain the symbolic expression for the estimation of the number of the aforementioned patient groups. The dataset consists of the latitude and longitude of the central location for each state and the number of patients in each of the goal groups for each day in the period of 22nd January 2020-3rd December 2020. The obtained symbolic expressions for each state are summed up to obtain symbolic expressions for the estimation of each of the patient groups (confirmed, deceased and recovered). These symbolic expressions are combined to obtain the symbolic expression for the estimation of the epidemiology curve for the entire U.S. The obtained symbolic expressions for the estimation of the number of confirmed, deceased and recovered patients for each state achieved R2 scores in the ranges 0.9406-0.9992, 0.9404-0.9998 and 0.9797-0.99955, respectively. These equations are summed up to formulate symbolic expressions for the estimation of the number of confirmed, deceased and recovered patients for the entire U.S. with achieved R2 scores of 0.9992, 0.9997 and 0.9996, respectively. Using these symbolic expressions, the equation for the estimation of the epidemiology curve for the entire U.S.
is formulated which achieved R2 score of 0.9933. Investigation showed that GP algorithm can produce symbolic expressions for the estimation of the number of confirmed, recovered and deceased patients as well as the epidemiology curve not only for the states but for the entire U.S. with very high accuracy.
- Published
- 2021
- Full Text
- View/download PDF
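The pipeline the abstract describes — fit one expression per state, sum the per-state estimates into a national curve, and score the result with R2 — can be sketched in a few lines. This is a minimal illustration, not the paper's method: the synthetic logistic case counts and the polynomial fits below are stand-ins for the real per-state data and for the GP-evolved symbolic expressions.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(100)

# Synthetic per-state cumulative case counts (stand-ins for the real dataset):
# noisy logistic curves with different inflection days per state.
states = {
    name: 1000 / (1 + np.exp(-(days - c) / 8)) + rng.normal(0, 10, days.size)
    for name, c in [("A", 40), ("B", 55), ("C", 70)]
}

def r2(y_true, y_pred):
    """Coefficient of determination, as reported in the abstract."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# One fitted expression per state (a polynomial here, standing in for
# the symbolic expression a GP run would evolve for that state).
fits = {name: np.poly1d(np.polyfit(days, y, 6)) for name, y in states.items()}

# National estimate = sum of the per-state expressions, mirroring the
# paper's aggregation step; then score the summed curve with R2.
national_true = sum(states.values())
national_pred = sum(f(days) for f in fits.values())
print(f"national R2: {r2(national_true, national_pred):.4f}")
```

The key structural point survives the stand-ins: because the per-state models are closed-form expressions, their sum is itself a closed-form expression for the national curve, which is what makes the GP approach interpretable where black-box models are not.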
50. Bibliometric research on the developments of artificial intelligence in radiomics toward nervous system diseases.
- Author
-
Jiangli Cui, Xingyu Miao, Xiaoyu Yanghao, and Xuqiu Qin
- Subjects
NEUROLOGICAL disorders ,RADIOMICS ,ARTIFICIAL intelligence ,BIBLIOMETRICS ,RESEARCH & development ,HEAT stroke - Abstract
Background: Growing interest suggests that the widespread application of radiomics has facilitated the diagnosis, prognosis, and classification of neurological diseases, and artificial intelligence methods applied in radiomics have achieved increasingly outstanding prediction results in recent years. However, few studies have systematically analyzed this field through bibliometrics. Our aim is to study the visual relationships among publications to identify trends and hotspots in radiomics research and to encourage more researchers to participate in radiomics studies. Methods: Publications on radiomics in neurological disease research were retrieved from the Web of Science Core Collection. Relevant countries, institutions, journals, authors, keywords, and references were analyzed using Microsoft Excel 2019, VOSviewer, and CiteSpace V, and the research status and hot trends were examined through burst detection. Results: On October 23, 2022, 746 records of studies on the application of radiomics in the diagnosis of neurological disorders, published from 2011 to 2023, were retrieved. Approximately half were written by scholars in the United States, and most were published in Frontiers in Oncology, European Radiology, Cancer, and Scientific Reports. Although China ranks first in the number of publications, the United States is the driving force in the field and enjoys a good academic reputation. Norbert Galldiks and Jie Tian published the most relevant articles, while Gillies RJ was cited the most. Radiology is a representative and influential journal in the field. "Glioma" is a current attractive research hotspot, and keywords such as "machine learning," "brain metastasis," and "gene mutations" have recently appeared at the research frontier. Conclusion: Most of the studies focus on clinical trial outcomes, such as the diagnosis, prediction, and prognosis of neurological disorders. The radiomics biomarkers and multi-omics studies of neurological disorders may soon become hot topics and should be closely monitored, particularly the relationship between tumor-related non-invasive imaging biomarkers and the intrinsic microenvironment of tumors. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
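The hotspot and frontier analysis the abstract describes rests on two basic counts that tools like VOSviewer compute over bibliographic records: keyword frequency (hotspots) and keyword co-occurrence (the edges of a co-occurrence map). A minimal sketch of those counts, using three hypothetical keyword lists as stand-ins for Web of Science records:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, standing in for retrieved records.
records = [
    ["radiomics", "glioma", "machine learning"],
    ["radiomics", "brain metastasis", "machine learning"],
    ["radiomics", "glioma", "gene mutations"],
]

# Keyword frequency across all records -> research hotspots.
freq = Counter(kw for rec in records for kw in rec)

# Pairwise co-occurrence within each record -> edges of the keyword map.
# Sorting each record makes every pair a canonical (a, b) tuple.
cooc = Counter(
    pair
    for rec in records
    for pair in combinations(sorted(set(rec)), 2)
)

print(freq.most_common(3))   # most frequent keywords
print(cooc.most_common(2))   # strongest co-occurrence links
```

Burst detection goes one step further by tracking when a keyword's frequency spikes over time, which is how terms like "machine learning" and "gene mutations" are flagged as recent frontier topics rather than long-standing ones.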