384 results
Search Results
2. The Rise of the AI Co-Pilot: Lessons for Design from Aviation and Beyond.
- Author
- Sellen, Abigail and Horvitz, Eric
- Subjects
- *HUMAN-artificial intelligence interaction, *AEROSPACE industries, *AUTOMATION, *ERGONOMICS, *CRITICAL thinking, *TRUST, *VIGILANCE (Psychology)
- Abstract
This article takes insight from the aviation industry to present a proposal for the future of human-artificial intelligence (AI) interaction. The article discusses human factors engineering in aviation and Lisanne Bainbridge’s 1983 paper “Ironies of Automation” with an emphasis on human monitoring of artificial intelligence systems and de-skilling as a result of automation. The article concludes with a discussion on the future of human-AI interaction including human agency and critical thinking skills.
- Published
- 2024
- Full Text
- View/download PDF
3. Personalised Reranking of Paper Recommendations Using Paper Content and User Behavior.
- Author
- Li, Xinyi, Chen, Yifan, Pettit, Benjamin, and de Rijke, Maarten
- Subjects
- *SEARCH engines, *EMAIL, *NEWSLETTERS, *RESEARCH, *MANUFACTURING processes
- Abstract
Academic search engines have been widely used to access academic papers, where users' information needs are explicitly represented as search queries. Some modern recommender systems have taken one step further by predicting users' information needs without the presence of an explicit query. In this article, we examine an academic paper recommender that sends out paper recommendations in email newsletters, based on the users' browsing history on the academic search engine. Specifically, we look at users who regularly browse papers on the search engine, and who sign up for the recommendation newsletters for the first time. We address the task of reranking the recommendation candidates that are generated by a production system for such users. We face the challenge that the users on whom we focus have not interacted with the recommender system before, which is a common scenario that every recommender system encounters when new users sign up. We propose an approach to reranking candidate recommendations that utilizes both paper content and user behavior. The approach is designed to suit the characteristics unique to our academic recommendation setting. For instance, content similarity measures can be used to find the closest match between candidate recommendations and the papers previously browsed by the user. To this end, we use a knowledge graph derived from paper metadata to compare entity similarities (papers, authors, and journals) in the embedding space. Since the users on whom we focus have no prior interactions with the recommender system, we propose a model to learn a mapping from users' browsed articles to user clicks on the recommendations. We combine both content and behavior into a hybrid reranking model that outperforms the production baseline significantly, providing a relative 13% increase in Mean Average Precision and 28% in Precision@1. Moreover, we provide a detailed analysis of the model components, highlighting where the performance boost comes from. 
The obtained insights reveal useful components for the reranking process and can be generalized to other academic recommendation settings as well, such as the utility of graph embedding similarity. Also, recent papers browsed by users provide stronger evidence for recommendation than historical ones. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
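The content-similarity component of the reranker described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' model: the function names, the cosine measure, and the `alpha` blending weight are all assumptions, standing in for the paper's knowledge-graph embeddings and learned browse-to-click mapping.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rerank(candidates, browsed_embs, alpha=0.5):
    """Score each candidate by its best content match against the user's
    browsed papers, blended with a behaviour-based prior score.

    candidates: list of (paper_id, embedding, behaviour_prior) tuples.
    browsed_embs: embeddings of papers the user previously browsed.
    """
    scored = []
    for paper_id, emb, behaviour_prior in candidates:
        content = max(cosine(emb, b) for b in browsed_embs)
        scored.append((alpha * content + (1 - alpha) * behaviour_prior, paper_id))
    # highest hybrid score first
    return [pid for _, pid in sorted(scored, reverse=True)]
```

A candidate whose embedding closely matches any recently browsed paper rises to the top even when the behavioural prior is uninformative, which mirrors the cold-start setting the abstract describes.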
4. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
- Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
- *ALGEBRA, *POLYNOMIALS, *CIRCUIT complexity, *ALGORITHMS, *DIRECTED acyclic graphs, *LOGIC circuits
- Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F_2), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits, using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
- Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
- *ALGORITHMS, *SYSTEMS design, *CYBER physical systems, *COMPUTER scheduling, *ARTIFICIAL intelligence, *ARTIFICIAL neural networks, *FIRST in, first out (Queuing theory)
- Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
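The inversion the abstract describes disappears once "more important" inputs are dequeued first instead of in FIFO arrival order. A toy sketch of that idea, not the paper's framework — the class and parameter names are invented:

```python
import heapq
import itertools

class CriticalityQueue:
    """Toy scheduler queue: critical inputs are always dequeued before
    non-critical ones; ties fall back to FIFO arrival order."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # tie-breaker preserving FIFO order

    def put(self, item, critical=False):
        # Lower tuples pop first: criticality 0 (critical) beats 1 (non-critical).
        heapq.heappush(self._heap, (0 if critical else 1, next(self._arrival), item))

    def get(self):
        return heapq.heappop(self._heap)[2]
```

With a plain FIFO queue, a critical input arriving behind two non-critical frames would wait for both; here it is served immediately, which is the reordering the paper's framework achieves inside the inference pipeline.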
6. Data Quality May Be All You Need.
- Author
- Edwards, Chris
- Subjects
- *DATA quality, *LANGUAGE models, *MACHINE learning, *OPEN source software, *INFORMATION filtering systems
- Abstract
The article discusses data quality and considers the building and scaling of data through open-source models. It mentions information duplication, degraded model quality, and data filtering in large language models (LLMs). Several research papers are cited, including one titled "Textbooks Are All You Need," which focuses on phi-1, a transformer-based model which synthetically generates high-quality textbooks using output from sources such as GPT-3.5.
- Published
- 2024
- Full Text
- View/download PDF
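The simplest form of the duplication filtering the article alludes to is dropping exact repeats of training text. A minimal sketch — the whitespace/case normalization is an assumption here, and production pipelines use fuzzier near-duplicate detection (e.g. MinHash):

```python
def dedup(docs):
    """Keep the first occurrence of each document, comparing a
    case- and whitespace-normalized key (exact-duplicate filtering)."""
    seen, kept = set(), []
    for doc in docs:
        key = " ".join(doc.lower().split())  # normalize case and whitespace
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept
```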
7. Optimal Re-Materialization Strategies for Heterogeneous Chains: How to Train Deep Neural Networks with Limited Memory.
- Author
- Beaumont, Olivier, Eyraud-Dubois, Lionel, Herrmann, Julien, Joly, Alexis, and Shilova, Alena
- Subjects
- *ARTIFICIAL neural networks, *AUTOMATIC differentiation, *MEMORY, *DATA warehousing
- Abstract
Training in Feed Forward Deep Neural Networks is a memory-intensive operation which is usually performed on GPUs with limited memory capacities. This may force data scientists to limit the depth of the models or the resolution of the input data if the data does not fit in the GPU memory. The re-materialization technique, whose idea comes from the checkpointing strategies developed in the Automatic Differentiation literature, allows data scientists to limit the memory requirements related to the storage of intermediate data (activations), at the cost of an increase in computational cost. This paper introduces a new strategy of re-materialization of activations that significantly reduces memory usage. It consists of selecting which activations are saved and which activations are deleted during the forward phase, and then recomputing the deleted activations when they are needed during the backward phase. We propose an original computation model that combines two types of activation savings: either only storing the layer inputs or recording the complete history of operations that produced the outputs. This paper focuses on the fully heterogeneous case, where the computation time and the memory requirement of each layer are different. We prove that finding the optimal solution is NP-hard and that classical techniques from the Automatic Differentiation literature do not apply. Moreover, the classical assumption of memory persistence of materialized activations, used to simplify the search for optimal solutions, no longer holds. Thus, we propose a weak memory persistence property and provide a dynamic program to compute the optimal sequence of computations. This algorithm is made available through the Rotor software, a PyTorch plug-in dealing with any network consisting of a sequence of layers, each of them having an arbitrarily complex structure. 
Through extensive experiments, we show that our implementation consistently outperforms existing re-materialization approaches for a large class of networks, image sizes, and batch sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
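The store-versus-recompute trade-off at the heart of re-materialization can be sketched with a plain function chain. This is a uniform-checkpointing toy (the `every` parameter and function names are assumptions), not the paper's heterogeneous dynamic program or the Rotor plug-in:

```python
def forward_with_checkpoints(layers, x, every=2):
    """Run a layer chain, storing only every `every`-th activation
    (plus the input) instead of all of them."""
    stored = {0: x}
    h = x
    for i, layer in enumerate(layers, start=1):
        h = layer(h)
        if i % every == 0:
            stored[i] = h  # checkpoint; intermediate activations are dropped
    return h, stored

def activation(layers, stored, j):
    """Recover the activation after layer j during the backward phase by
    recomputing forward from the nearest stored checkpoint."""
    k = max(i for i in stored if i <= j)
    h = stored[k]
    for layer in layers[k:j]:
        h = layer(h)
    return h
```

Memory drops from one activation per layer to one per checkpoint, at the price of re-running at most `every - 1` layers per lookup — the trade-off whose optimal (non-uniform) schedule the paper shows is NP-hard to compute in the heterogeneous case.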
8. Rebuttal How-To: Strategies, Tactics, and the Big Picture in Research: Demystifying rebuttal writing.
- Author
- Yao, Danfeng (Daphne)
- Subjects
- *SCHOLARLY peer review, *RESEARCH evaluation
- Abstract
The article offers suggestions for writing a rebuttal when one has submitted a paper to an academic conference and has received a response indicating that revisions are required for the possible acceptance of the paper.
- Published
- 2024
- Full Text
- View/download PDF
9. Are CS Conferences (Too) Closed Communities? Assessing whether newcomers have a more difficult time achieving paper acceptance at established conferences.
- Author
- Cabot, Jordi, Cánovas Izquierdo, Javier Luis, and Cosentino, Valerio
- Subjects
- *COMPUTER science conferences, *SOCIAL closure, *SCHOLARLY peer review, *INGROUPS (Social groups), *MENTORING
- Abstract
The article discusses research on whether computer science (CS) conferences are open to papers from outsiders. Topics include the notion of CS conferences as closed communities in relation to shared cultures, processes for review of paper proposals, and the possibility of mentoring programs for young researchers.
- Published
- 2018
- Full Text
- View/download PDF
10. Boosting Fuzzer Efficiency: An Information Theoretic Perspective.
- Author
- Böhme, Marcel, Manès, Valentin J. M., and Cha, Sang Kil
- Subjects
- *ENTROPY (Information theory), *UNCERTAINTY (Information theory), *COMPUTER software testing, *INFORMATION theory
- Abstract
This article discusses the concept of fuzzing as a learning process, using Shannon's entropy to quantify the efficiency of a fuzzer in discovering new behaviors of a program. The authors propose an entropy-based power schedule called "Entropic" for greybox fuzzing, assigning more energy to seeds that reveal more information about a program's behaviors. This approach is implemented in the popular greybox fuzzer LibFuzzer and has been integrated into Google and Microsoft's fuzzing platforms. The paper highlights that the efficiency of a fuzzer is determined by the average information each generated input reveals about a program's behaviors. The authors conducted experiments with over 250 open-source programs, demonstrating a substantial improvement in efficiency and confirming their hypothesis that an efficient fuzzer maximizes information.
- Published
- 2023
- Full Text
- View/download PDF
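The core quantity — how much information the inputs generated from a seed reveal about the program's behaviors — can be sketched with a plug-in Shannon-entropy estimate. This is a simplification for illustration only; Entropic itself uses more careful estimators over discovered "species," and the names below are invented:

```python
import math
from collections import Counter

def entropy(observations):
    """Shannon entropy (bits) of the empirical distribution of program
    behaviors ("species") exercised by inputs generated from one seed."""
    counts = Counter(observations)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def assign_energy(seed_observations, base_energy=100):
    """Entropy-proportional power schedule: seeds whose mutants reveal
    more information get proportionally more fuzzing energy."""
    ent = {seed: entropy(obs) for seed, obs in seed_observations.items()}
    total = sum(ent.values()) or 1.0
    return {seed: base_energy * e / total for seed, e in ent.items()}
```

A seed whose mutants keep hitting the same path carries zero information and gets no energy, while a seed whose mutants spread across many behaviors absorbs most of the budget — the "maximize information" hypothesis the experiments confirm.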
11. Bringing Industry Back to Conferences, and Paying for Results.
- Author
- Patterson, David and Bugayenko, Yegor
- Subjects
- *COMPUTER science conferences, *CONFERENCE papers, *COMPUTER programming, *LABOR productivity, *TELECOMMUTING
- Abstract
The article presents blog postings on the topics of scientific papers at conferences and the problem of monitoring and paying for productivity from telecommuting computer programmers.
- Published
- 2020
- Full Text
- View/download PDF
12. Ethical Considerations in Network Measurement Papers.
- Author
- Partridge, Craig and Allman, Mark
- Subjects
- *COMPUTER networks, *RESEARCH ethics, *EVALUATION methodology, *EXPERIMENTAL ethics
- Abstract
The article discusses the need for ethical considerations in computer network measurement papers so authors can justify ethical foundations for experimental methodologies. It covers the evolution of the field of ethics, the ability to extract information from measurement data, and legal issues related to network measurement.
- Published
- 2016
- Full Text
- View/download PDF
13. Evolving Knowledge Graph Representation Learning with Multiple Attention Strategies for Citation Recommendation System.
- Author
- Liu, Jhih-Chen, Chen, Chiao-Ting, Lee, Chi, and Huang, Szu-Hao
- Subjects
- *KNOWLEDGE graphs, *KNOWLEDGE representation (Information theory), *REPRESENTATIONS of graphs, *RECOMMENDER systems, *ARTIFICIAL intelligence
- Abstract
The growing number of publications in the field of artificial intelligence highlights the need for researchers to enhance their efficiency in searching for relevant articles. Most paper recommendation models either rely on simplistic citation relationships among papers or focus on content-based approaches, both of which overlook interactions within academic networks. To address the aforementioned problem, knowledge graph embedding (KGE) methods have been used for citation recommendations because recent research proves that graph representations can effectively improve recommendation model accuracy. However, academic networks are dynamic, leading to changes in the representations of users and items over time. The majority of KGE-based citation recommendations are primarily designed for static graphs, thus failing to capture the evolution of dynamic knowledge graph (DKG) structures. To address these challenges, we introduced the evolving knowledge graph embedding (EKGE) method. In this methodology, evolving knowledge graphs are input into time-series models to learn the patterns of structural evolution. The model has the capability to generate embeddings for each entity at various time points, thereby overcoming the limitation of static models, which require retraining to acquire embeddings at each specific time point. To enhance the efficiency of feature extraction, we employed a multiple attention strategy. This helped the model find recommendation lists that are closely related to a user's needs, leading to improved recommendation accuracy. Various experiments conducted on a citation recommendation dataset revealed that the EKGE model exhibits a 1.13% increase in prediction accuracy compared to other KGE methods. Moreover, the model's accuracy can be further increased by an additional 0.84% through the incorporation of an attention mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
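The attention step can be illustrated in isolation: weight an entity's embedding from each graph snapshot by its relevance to a query vector, then combine them. A minimal sketch with invented names — EKGE's actual multiple-attention design over time-series models is considerably more involved:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def temporal_attention(snapshot_embs, query):
    """Combine an entity's embeddings from successive knowledge-graph
    snapshots, weighting each snapshot by its attention score (dot
    product with a query vector, normalized by softmax)."""
    scores = softmax(np.array([emb @ query for emb in snapshot_embs]))
    return sum(w * emb for w, emb in zip(scores, snapshot_embs))
```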
14. Towards a Greener and Fairer Transportation System: A Survey of Route Recommendation Techniques.
- Author
- Makhdomi, Aqsa Ashraf and Gillani, Iqra Altaf
- Subjects
- *SUSTAINABLE transportation, *URBAN transportation, *RECOMMENDER systems, *RIDESHARING services, *CITY dwellers, *CITIES & towns
- Abstract
In recent years, ride-hailing services have emerged as a popular means of transportation for the residents of urban areas. There is an inequality in the spatio-temporal distribution of demand and supply, which requires the proper recommendation of routes to drivers in order to guide them towards riders optimally. This paper provides a review of different route recommendation strategies that have been applied in ride-hailing platforms, with a main focus on fairness and environmental issues. It is important to consider the environmental aspects of route recommendation systems, as the transportation sector is one of the major sources of air pollution and has reduced the life expectancy of people around the globe. Moreover, there is an unfair distribution of resources and opportunities among the drivers and riders of the platform, which has affected their long-term sustainability in the market. In this paper, we highlight the critical challenges and opportunities inherent in the design of green and fair route recommendation systems and indicate some possible directions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Research for Practice: OS Scheduling: Better scheduling policies for modern computing systems.
- Author
- Kaffes, Kostis
- Subjects
- *COMPUTER operating systems, *SCHEDULING software, *CENTRAL processing units, *LOAD balancing (Computer networks), *LINUX operating systems
- Abstract
This article discusses operating system (OS) scheduling through three current papers on the topic, selected and reviewed by Kostis Kaffes. The cited papers address OS scheduling in terms of performance, extensibility, and policy choice, examining the trade-off between latency and utilization in scheduling, the implementation of arbitrary scheduling policies, and policy choice on an application-by-application basis.
- Published
- 2023
- Full Text
- View/download PDF
16. ACM Europe Council's Best Paper Awards.
- Author
- Jorge, Joaquim, Glencross, Mashhuda, and Quigley, Aaron
- Subjects
- *COMPUTER science research, *RESEARCH papers (Students), *COMPUTER science conferences
- Abstract
The authors report on awards given by the Association for Computing Machinery (ACM) for student research papers in computer science. They mention the number of awards presented by the ACM Europe Council, the aim of the awards to foster and recognize excellence in computer science, and both short- and long-term impacts of the awards.
- Published
- 2019
- Full Text
- View/download PDF
17. DIAMetrics: Benchmarking Query Engines at Scale.
- Author
- Deep, Shaleen, Gruenheid, Anja, Nagaraj, Kruthi, Naito, Hiro, Naughton, Jeff, and Viglas, Stratis
- Subjects
- *BENCHMARKING (Management), *SEARCH engines, *SOFTWARE measurement, *WEBOMETRICS, *PROGRAM transformation
- Abstract
This paper introduces DIAMetrics: a novel framework for end-to-end benchmarking and performance monitoring of query engines. DIAMetrics consists of a number of components supporting tasks such as automated workload summarization, data anonymization, benchmark execution, monitoring, regression identification, and alerting. The architecture of DIAMetrics is highly modular and supports multiple systems by abstracting their implementation details and relying on common canonical formats and pluggable software drivers. The end result is a powerful unified framework that is capable of supporting every aspect of benchmarking production systems and workloads. DIAMetrics was developed at Google and is used to benchmark various internal query engines. In this paper, we give an overview of DIAMetrics and discuss its design and implementation. Furthermore, we provide details about its deployment and example use cases. Given the variety of supported systems and use cases within Google, we argue that its core concepts can be used more widely to enable comparative end-to-end benchmarking in other industrial environments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
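The pluggable-driver idea — hiding each engine behind a common interface fed canonical queries — can be sketched as follows. The class and function names are hypothetical, not DIAMetrics' internal API:

```python
import time
from abc import ABC, abstractmethod

class EngineDriver(ABC):
    """Pluggable driver: each engine adapts queries in a canonical
    format to its own dialect and runs them."""

    @abstractmethod
    def execute(self, canonical_query: str) -> None: ...

class MockEngine(EngineDriver):
    """Stand-in engine for the sketch; a real driver would translate
    the canonical query and submit it to the engine."""

    def execute(self, canonical_query: str) -> None:
        pass

def run_benchmark(driver: EngineDriver, workload, repeats=3):
    """Time each canonical query against one driver; report the
    best-of-N latency in seconds per query."""
    results = {}
    for query in workload:
        timings = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            driver.execute(query)
            timings.append(time.perf_counter() - t0)
        results[query] = min(timings)
    return results
```

Because the harness only sees `EngineDriver`, the same workload can be replayed against any number of engines for comparative end-to-end benchmarking, which is the modularity the paper's architecture provides at production scale.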
18. A Survey on Evaluation of Large Language Models.
- Author
- Chang, Yupeng, Wang, Xu, Wang, Jindong, Wu, Yuan, Yang, Linyi, Zhu, Kaijie, Chen, Hao, Yi, Xiaoyuan, Wang, Cunxiang, Wang, Yidong, Ye, Wei, Zhang, Yue, Chang, Yi, Yu, Philip S., Yang, Qiang, and Xie, Xing
- Subjects
- *LANGUAGE models, *NATURAL language processing, *TASK analysis, *RESEARCH personnel, *EVALUATION methodology
- Abstract
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the 'where' and 'how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Research for Practice: The Fun in Fuzzing.
- Author
- Nagy, Stefan
- Subjects
- *ANOMALY detection (Computer security), *DEBUGGING, *COMPUTER security vulnerabilities, *COMPUTER software testing
- Abstract
The article presents information on software fuzzing, a process used for automated bug and vulnerability testing. Several papers are cited and discussed by Stefan Nagy, an assistant professor at the University of Utah. Software fuzzing uses new and unexpected inputs which are systematically generated in order to test programs. The cited papers provide information on topics such as specific classes of bugs and finding bugs in compilers, demonstrating the advantage of combining traditional testing methods with innovative techniques and machine learning in order to identify bugs in complex software systems.
- Published
- 2023
- Full Text
- View/download PDF
20. Conceptual Modeling: Topics, Themes, and Technology Trends.
- Author
- Storey, Veda C., Lukyanenko, Roman, and Castellanos, Arturo
- Subjects
- *CONCEPTUAL models, *TECHNOLOGICAL innovations, *INFORMATION technology, *SYSTEMS development, *INFORMATION storage & retrieval systems, *ELECTRONIC journals
- Abstract
Conceptual modeling is an important part of information systems development and use that involves identifying and representing relevant aspects of reality. Although the past decades have experienced continuous digitalization of services and products that impact business and society, conceptual modeling efforts are still required to support new technologies as they emerge. This paper surveys research on conceptual modeling over the past five decades and shows how its topics and trends continue to evolve to accommodate emerging technologies, while remaining grounded in basic constructs. We survey over 5,300 papers that address conceptual modeling topics from the 1970s to the present, which are collected from 35 multidisciplinary journals and conferences, and use them as the basis from which to analyze the progression of conceptual modeling. The important role that conceptual modeling should play in our evolving digital world is discussed, and future research directions proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Incentive Mechanisms in Peer-to-Peer Networks -- A Systematic Literature Review.
- Author
- Ihle, Cornelius, Trautwein, Dennis, Schubotz, Moritz, Meuschke, Norman, and Gipp, Bela
- Subjects
- *INCENTIVE (Psychology), *LITERATURE reviews, *EVIDENCE gaps, *COMPUTER network security, *REPUTATION
- Abstract
Centralized networks inevitably exhibit single points of failure that malicious actors regularly target. Decentralized networks are more resilient if numerous participants contribute to the network's functionality. Most decentralized networks employ incentive mechanisms to coordinate the participation and cooperation of peers and thereby ensure the functionality and security of the network. This article systematically reviews incentive mechanisms for decentralized networks and networked systems by covering 165 prior literature reviews and 178 primary research papers published between 1993 and October 2022. Of the considered sources, we analyze 11 literature reviews and 105 primary research papers in detail by categorizing and comparing the distinctive properties of the presented incentive mechanisms. The reviewed incentive mechanisms establish fairness and reward participation and cooperative behavior. We review work that substitutes central authority through independent and subjective mechanisms run in isolation at each participating peer and work that applies multiparty computation. We use monetary, reputation, and service rewards as categories to differentiate the implementations and evaluate each incentive mechanism's data management, attack resistance, and contribution model. Further, we highlight research gaps and deficiencies in reproducibility and comparability. Finally, we summarize our assessments and provide recommendations to apply incentive mechanisms to decentralized networks that share computational resources. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI.
- Author
- Nauta, Meike, Trienes, Jan, Pathak, Shreyasi, Nguyen, Elisa, Peters, Michelle, Schmitt, Yasmin, Schlötterer, Jörg, van Keulen, Maurice, and Seifert, Christin
- Subjects
- *EVALUATION methodology, *ARTIFICIAL intelligence, *QUANTITATIVE research, *MACHINE learning
- Abstract
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing black boxes raised the question of how to evaluate explanations of machine learning (ML) models. While interpretability and explainability are often presented as a subjectively validated binary property, we consider it a multifaceted concept. We identify 12 conceptual properties, such as Compactness and Correctness, that should be evaluated for comprehensively assessing the quality of an explanation. Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers published in the past 7 years at major AI and ML conferences that introduce an XAI method. We find that one in three papers evaluate exclusively with anecdotal evidence, and one in five papers evaluate with users. This survey also contributes to the call for objective, quantifiable evaluation methods by presenting an extensive overview of quantitative XAI evaluation methods. Our systematic collection of evaluation methods provides researchers and practitioners with concrete tools to thoroughly validate, benchmark, and compare new and existing XAI methods. The Co-12 categorization scheme and our identified evaluation methods open up opportunities to include quantitative metrics as optimization criteria during model training to optimize for accuracy and interpretability simultaneously. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Machine Learning Applications in Internet-of-Drones: Systematic Review, Recent Deployments, and Open Issues.
- Author
- Heidari, Arash, Navimipour, Nima Jafari, Unal, Mehmet, and Zhang, Guodao
- Subjects
- *MACHINE learning, *DEEP learning, *CONVOLUTIONAL neural networks, *PYTHON programming language, *AGRICULTURE, *SECURITY management
- Abstract
Deep Learning (DL) and Machine Learning (ML) are effectively utilized in various complicated challenges in healthcare, industry, and academia. The Internet of Drones (IoD) has lately cropped up due to high adjustability to a broad range of unpredictable circumstances. In addition, Unmanned Aerial Vehicles (UAVs) could be utilized efficiently in a multitude of scenarios, including rescue missions and search, farming, mission-critical services, surveillance systems, and so on, owing to technical and realistic benefits such as low movement, the capacity to lengthen wireless coverage zones, and the ability to attain places unreachable to human beings. In many studies, IoD and UAV are utilized interchangeably. Besides, drones enhance the efficiency aspects of various network topologies, including delay, throughput, interconnectivity, and dependability. Nonetheless, the deployment of drone systems raises various challenges relating to the inherent unpredictability of the wireless medium, the high mobility degrees, and the battery life that could result in rapid topological changes. In this paper, the IoD is originally explained in terms of potential applications and comparative operational scenarios. Then, we classify ML in the IoD-UAV world according to its applications, including resource management, surveillance and monitoring, object detection, power control, energy management, mobility management, and security management. This research aims to supply the readers with a better understanding of (1) the fundamentals of IoD/UAV, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that need further investigation and consideration. The results suggest that the Convolutional Neural Network (CNN) method is the ML method most often employed in the reviewed publications. According to the research, most papers address resource and mobility management. 
Most articles have focused on enhancing only one parameter, with the accuracy parameter receiving the most attention. Python is the most commonly used language, used about 90% of the time, and 2021 saw the largest number of papers published. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. VesNet: A Vessel Network for Jointly Learning Route Pattern and Future Trajectory.
- Author
- Jiang, Fenyu, Wang, Huandong, and Li, Yong
- Subjects
- *TRAFFIC monitoring, *MOVEMENT sequences, *IMMUNOCOMPUTERS
- Abstract
Vessel trajectory prediction is the key to maritime applications such as traffic surveillance, collision avoidance, anomaly detection, and so on. Making predictions more precisely requires a better understanding of the moving trend for a particular vessel since the movement is affected by multiple factors like marine environment, vessel type, and vessel behavior. In this paper, we propose a model named VesNet, based on the attentional seq2seq framework, to predict vessel future movement sequence by observing the current trajectory. Firstly, we extract the route patterns from the raw AIS data during preprocessing. Then, we design a multi-task learning structure to learn how to implement route pattern classification and vessel trajectory prediction simultaneously. By comparing with representative baseline models, we find that our VesNet has the best performance in terms of long-term prediction precision. Additionally, VesNet can recognize the route pattern by capturing the implicit moving characteristics. The experimental results prove that the proposed multi-task learning assists the vessel trajectory prediction mission. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. A LAPACK Implementation of the Dynamic Mode Decomposition.
- Author
- Drmač, Zlatko
- Subjects
- *NUMERICAL solutions for linear algebra, *NONLINEAR dynamical systems, *SINGULAR value decomposition, *SOFTWARE reliability, *NONLINEAR analysis, *LATENT variables
- Abstract
The Dynamic Mode Decomposition (DMD) is a method for the computational analysis of nonlinear dynamical systems in data-driven scenarios. Based on high-fidelity numerical simulations or experimental data, the DMD can be used to reveal latent structures in the dynamics or as a forecasting or model-order-reduction tool. The theoretical underpinning of the DMD is the Koopman operator on a Hilbert space of observables of the dynamics under study. This paper describes a numerically robust and versatile variant of the DMD and its implementation using the state-of-the-art dense numerical linear algebra software package LAPACK. The features of the proposed software solution include residual bounds for the computed eigenpairs of the DMD matrix, eigenvector refinement, computation of the eigenvectors of the Exact DMD, and a compressed DMD for efficient analysis of high-dimensional problems that can easily be adapted for fast updates in a streaming DMD. Numerical analysis is the bedrock of the numerical robustness and reliability of the software, which is tested following the highest standards and practices of LAPACK. Important numerical topics are discussed in detail and illustrated with numerous numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
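The core SVD-based DMD computation this abstract builds on can be sketched in a few lines of NumPy. This is a generic illustration of the standard (Exact DMD) algorithm, not the LAPACK-based implementation the paper presents; the function name and the test matrices are ours:

```python
import numpy as np

def dmd(X, Y, r):
    """Standard SVD-based Dynamic Mode Decomposition.

    X, Y: snapshot matrices with Y[:, k] the successor of X[:, k];
    r: truncation rank. Returns DMD eigenvalues and Exact DMD modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Rayleigh quotient: projection of the underlying operator onto the POD basis
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes: Y V Sigma^{-1} W
    modes = (Y @ Vh.conj().T / s) @ W
    return eigvals, modes

# Snapshots of a linear system x_{k+1} = A x_k recover A's eigenvalues
rng = np.random.default_rng(0)
A = np.diag([0.9, 0.5])
X = rng.standard_normal((2, 20))
Y = A @ X
eigvals, modes = dmd(X, Y, r=2)
print(np.round(np.sort(eigvals.real), 6))
```

For linear data as above, the computed DMD eigenvalues match the spectrum of the generating operator up to rounding.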
26. Algorithm 1040: The Sparse Grids Matlab Kit - a Matlab implementation of sparse grids for high-dimensional function approximation and uncertainty quantification.
- Author
-
PIAZZOLA, CHIARA and TAMELLINI, LORENZO
- Subjects
- *
ALGORITHMS , *MATHEMATICAL forms - Abstract
The Sparse Grids Matlab Kit provides a Matlab implementation of sparse grids and can be used for approximating high-dimensional functions and, in particular, for surrogate-model-based uncertainty quantification. It is lightweight, high-level, and easy to use, making it well suited to quick prototyping and teaching, and it is also equipped with features that allow its use in realistic applications. The goal of this paper is to give an overview of the data structures and mathematical aspects forming the basis of the software, and to compare the current release of our package with similar available software. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Hermitian Dynamic Mode Decomposition - Numerical Analysis and Software Solution.
- Author
-
DRMAČ, ZLATKO
- Subjects
- *
NUMERICAL solutions to differential equations , *NUMERICAL solutions for linear algebra , *DYNAMICAL systems , *PERTURBATION theory , *COMPUTATIONAL fluid dynamics - Abstract
The Dynamic Mode Decomposition (DMD) is a versatile and increasingly popular method for data-driven analysis of dynamical systems that arise in a variety of applications in, e.g., computational fluid dynamics, robotics, or machine learning. In the framework of numerical linear algebra, it is a data-driven Rayleigh-Ritz procedure applied to a DMD matrix that is derived from the supplied data. In some applications, the physics of the underlying problem implies hermiticity of the DMD matrix, so the general DMD procedure is not computationally optimal; furthermore, it does not guarantee important structural properties of the Hermitian eigenvalue problem and may return non-physical solutions. This paper proposes a software solution for Hermitian (including real symmetric) DMD matrices, accompanied by a numerical analysis that contains several fine and instructive numerical details. The eigenpairs are computed together with their residuals, and perturbation theory provides error bounds for the eigenvalues and eigenvectors. The software is developed and tested using the LAPACK package. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. A Survey on Graph Representation Learning Methods.
- Author
-
KHOSHRAFTAR, SHIMA and AIJUN AN
- Subjects
- *
REPRESENTATIONS of graphs , *GRAPH algorithms - Abstract
Graph representation learning has been a very active research area in recent years. Its goal is to generate graph representation vectors that accurately capture the structure and features of large graphs. This is especially important because the quality of the graph representation vectors affects their performance in downstream tasks such as node classification, link prediction, and anomaly detection. Many techniques have been proposed for generating effective graph representation vectors, which generally fall into two categories: traditional graph embedding methods and graph neural network (GNN)-based methods. These methods can be applied to both static and dynamic graphs. A static graph is a single fixed graph, whereas a dynamic graph evolves over time, and nodes and edges can be added to or deleted from it. In this survey, we review graph embedding methods in both the traditional and GNN-based categories for both static and dynamic graphs, including recent papers published up to the time of submission. In addition, we summarize a number of limitations of GNNs and the proposed solutions to these limitations; such a summary has not been provided in previous surveys. Finally, we explore some open and ongoing research directions for future work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
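As a minimal illustration of the "traditional graph embedding" category this survey describes, the sketch below embeds the nodes of a small static graph via a truncated spectral decomposition of its adjacency matrix. The method choice, function name, and example graph are ours, not the survey's:

```python
import numpy as np

def spectral_embedding(A, d):
    """Embed each node as its row in the top-d eigenvector basis of the
    (symmetric) adjacency matrix, scaled by the square roots of the
    corresponding eigenvalue magnitudes."""
    w, V = np.linalg.eigh(A)                # symmetric A: real eigenpairs
    top = np.argsort(-np.abs(w))[:d]        # d eigenvalues largest in magnitude
    return V[:, top] * np.sqrt(np.abs(w[top]))

# Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge 2-3
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

Z = spectral_embedding(A, d=2)
print(Z.shape)  # → (6, 2)
```

The resulting rows of `Z` are the node representation vectors that downstream tasks such as node classification or link prediction would consume.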
29. Explicit State Representation Guided Video-based Pedestrian Attribute Recognition.
- Author
-
WEI-QING LU, HAI-MIAO HU, JINZUO YU, SHIFENG ZHANG, and HANZI WANG
- Subjects
- *
PEDESTRIANS , *RECOGNITION (Psychology) , *JUDGMENT (Psychology) , *RECOGNITION (Philosophy) - Abstract
Pedestrian attribute recognition aims to generate a structured description of pedestrians, which plays an important role in surveillance. Current works usually assume that the images and the specific pedestrian states, including pedestrian occlusion and pedestrian orientation, are given. However, we argue that current works ignore the guidance of the pedestrian state and cannot achieve adequate performance, since the appearance feature becomes unreliable under variations of the pedestrian state, which are common in practice. Therefore, this paper proposes Explicit State Representation (ExSR) Guided Pedestrian Attribute Recognition to improve accuracy through state learning and attribute fusion among frames. First, the pedestrian state is explicitly represented by concatenating the pedestrian orientation and occlusion, which can be accurately determined by analyzing the pose. Second, a state-aware pedestrian attribute fusion method is proposed, divided into two cases: the inter-state case and the intra-state case. In the intra-state case, the appearance feature remains stable, and attribute relations within a single frame are propagated via a Graph Neural Network to refine the predictions. In the inter-state case, the state changes, attribute-relation propagation is prevented, and the strengths of attribute recognition in each frame are combined to make a reliable judgment on the invisible region. The experimental results demonstrate that ExSR outperforms state-of-the-art methods on two public databases, benefiting from the explicit introduction of the state into attribute recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Explainable Goal-driven Agents and Robots - A Comprehensive Review.
- Author
-
SADO, FATAI, CHU KIONG LOO, WEI SHIUNG LIEW, and KERZEL, MATTHIAS
- Subjects
- *
GOAL (Psychology) , *ARTIFICIAL intelligence , *INTELLIGENT agents , *DEEP learning , *AUTONOMOUS robots , *ROBOTS , *ROAD maps - Abstract
Recent applications of autonomous agents and robots have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are ‘black boxes’, which renders their choices or actions opaque and makes it difficult to trust them in safety-critical applications. Recent attention to the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of these studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots remain sparse. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents’ perceptual functions (e.g., senses, vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a road map for the possible realization of effective goal-driven explainable agents and robots. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. A Survey of User Perspectives on Security and Privacy in a Home Networking Environment.
- Author
-
PATTNAIK, NANDITA, SHUJUN LI, and NURSE, JASON R. C.
- Subjects
- *
HOME computer networks , *HOME security measures , *HOME environment , *SMART homes , *LITERATURE reviews , *DWELLINGS - Abstract
The security and privacy of smart home systems, particularly from a home user's perspective, have been a very active research area in recent years. However, via a meta-review of 52 review papers covering related topics (published between 2000 and 2021), this article shows a lack of a more recent literature review on user perspectives of smart home security and privacy since the 2010s. This identified gap motivated us to conduct a systematic literature review (SLR) covering 126 relevant research papers published from 2010 to 2021. Our SLR led to the discovery of a number of important areas where further research is needed; these include holistic methods that consider a more diverse and heterogeneous range of home devices, interactions between multiple home users, complicated data flow between multiple home devices and home users, some less studied demographic factors, and advanced conceptual frameworks. Based on these findings, we recommended key future research directions, e.g., research for a better understanding of security and privacy aspects in different multi-device and multi-user contexts, and a more comprehensive ontology on the security and privacy of the smart home covering varying types of home devices and behaviors of different types of home users. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Honeyword-based Authentication Techniques for Protecting Passwords: A Survey.
- Author
-
CHAKRABORTY, NILESH, JIANQIANG LI, LEUNG, VICTOR C. M., MONDAL, SAMRAT, YI PAN, CHENGWEN LUO, and MUKHERJEE, MITHUN
- Subjects
- *
COMPUTER passwords , *SYSTEM administrators - Abstract
Honeyword (or decoy password) based authentication, first introduced by Juels and Rivest in 2013, has emerged as a security mechanism that can provide security against server-side threats to password files. From a theoretical perspective, this mechanism greatly reduces attackers' efficiency, as it detects a threat to a password file so that the system administrator can be notified almost immediately once an attacker tries to exploit the compromised file. This paper presents a comprehensive survey of the relevant research and technological developments in honeyword-based authentication techniques. We cover twenty-three honeyword-related techniques reported in different research articles since 2013. This survey helps readers to (i) understand how the honeyword-based security mechanism works in practice, (ii) get a comparative view of the existing honeyword-based techniques, and (iii) identify the existing gaps that have yet to be filled and the emergent research opportunities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
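The mechanism this survey covers can be illustrated with a toy sketch of Juels and Rivest's scheme: store the real password's hash among decoy ("honeyword") hashes and raise an alarm when a decoy is submitted. The helper names and toy decoys below are ours, and real deployments would use a slow password hash rather than plain SHA-256:

```python
import hashlib
import secrets

def make_account(password, decoys):
    """Store the real password among decoys in random order; the index of the
    real one is kept separately (the 'honeychecker' in the original scheme)."""
    sweetwords = decoys + [password]
    secrets.SystemRandom().shuffle(sweetwords)
    hashes = [hashlib.sha256(w.encode()).hexdigest() for w in sweetwords]
    sugar_index = sweetwords.index(password)  # held by the honeychecker only
    return hashes, sugar_index

def check_login(hashes, sugar_index, attempt):
    """Return 'ok', 'fail', or 'alarm' (a decoy matched: the password file
    was likely stolen and cracked offline)."""
    h = hashlib.sha256(attempt.encode()).hexdigest()
    if h not in hashes:
        return "fail"
    return "ok" if hashes.index(h) == sugar_index else "alarm"

hashes, idx = make_account("correct-horse", ["correct-house", "collect-horse"])
print(check_login(hashes, idx, "correct-horse"))  # → ok
print(check_login(hashes, idx, "collect-horse"))  # → alarm
print(check_login(hashes, idx, "wrong"))          # → fail
```

An attacker who inverts the stolen hash file sees several plausible candidates per account and cannot tell which is real, so submitting a decoy trips the alarm the survey describes.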
33. Achieving High Performance the Functional Way: Expressing High-Performance Optimizations as Rewrite Strategies.
- Author
-
Hagedorn, Bastian, Lenfers, Johannes, Koehler, Thomas, Xueying Qin, Gorlatch, Sergei, and Steuwer, Michel
- Subjects
- *
OPL (Computer program language) , *PROGRAMMING languages , *DOMAIN-specific programming languages , *COMPUTER programming , *ELECTRONIC data processing , *COMPUTER software development - Abstract
Optimizing programs to run efficiently on modern parallel hardware is hard but crucial for many applications. The predominantly used imperative languages force the programmer to intertwine the code describing functionality and optimizations. This results in a portability nightmare that is particularly problematic given the accelerating trend toward specialized hardware devices to further increase efficiency. Many emerging domain-specific languages (DSLs) used in performance-demanding domains such as deep learning attempt to simplify or even fully automate the optimization process. Using a high-level--often functional--language, programmers focus on describing functionality in a declarative way. In some systems such as Halide or TVM, a separate schedule specifies how the program should be optimized. Unfortunately, these schedules are not written in well-defined programming languages. Instead, they are implemented as a set of ad hoc predefined APIs that the compiler writers have exposed. In this paper, we show how to employ functional programming techniques to solve this challenge with elegance. We present two functional languages that work together--each addressing a separate concern. RISE is a functional language for expressing computations using well-known data-parallel patterns. ELEVATE is a functional language for describing optimization strategies. A high-level RISE program is transformed into a low-level form using optimization strategies written in ELEVATE. From the rewritten low-level program, high-performance parallel code is automatically generated. In contrast to existing high-performance domain-specific systems with scheduling APIs, in our approach programmers are not restricted to a set of built-in operations and optimizations but freely define their own computational patterns in RISE and optimization strategies in ELEVATE in a composable and reusable way. 
We show how our holistic functional approach achieves competitive performance with the state-of-the-art imperative systems such as Halide and TVM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
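The paper's central idea, optimization strategies as composable functions over programs, can be sketched in miniature. The combinator names below mirror common strategy-language vocabulary, but the toy expression language and rules are ours, not RISE/ELEVATE:

```python
# A strategy maps an expression (nested tuples) to a rewritten expression,
# or None on failure. Strategies compose like ordinary functions.

def add_zero(e):
    """Rewrite rule: ("add", x, 0) → x."""
    return e[1] if isinstance(e, tuple) and e[0] == "add" and e[2] == 0 else None

def seq(s1, s2):
    """Apply s1, then s2; fail if either fails."""
    return lambda e: (lambda r: s2(r) if r is not None else None)(s1(e))

def lchoice(s1, s2):
    """Try s1; fall back to s2 on failure."""
    return lambda e: s1(e) if s1(e) is not None else s2(e)

def try_(s):
    """Never fail: fall back to the identity strategy."""
    return lchoice(s, lambda e: e)

def in_lhs(s):
    """Traversal combinator: apply s to the first operand of a "mul" node."""
    return lambda e: (
        lambda r: ("mul", r, e[2]) if r is not None else None
    )(s(e[1]))

expr = ("mul", ("add", "x", 0), 1)
print(in_lhs(try_(add_zero))(expr))  # → ('mul', 'x', 1)
```

Because strategies are first-class values, users can define their own rules and traversals and compose them freely, which is the contrast the abstract draws with ad hoc scheduling APIs.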
34. Speculative Taint Tracking (STT): A Comprehensive Protection for Speculatively Accessed Data.
- Author
-
Jiyong Yu, Mengjia Yan, Khyzha, Artem, Morrison, Adam, Torrellas, Josep, and Fletcher, Christopher W.
- Subjects
- *
COMPUTER security , *DATA protection , *MALWARE prevention , *COMPUTER architecture , *COMPUTER performance - Abstract
Speculative execution attacks present an enormous security threat, capable of reading arbitrary program data under malicious speculation, and later exfiltrating that data over microarchitectural covert channels. This paper proposes speculative taint tracking (STT), a high security and high performance hardware mechanism to block these attacks. The main idea is that it is safe to execute and selectively forward the results of speculative instructions that read secrets, as long as we can prove that the forwarded results do not reach potential covert channels. The technical core of the paper is a new abstraction to help identify all microarchitectural covert channels, and an architecture to quickly identify when a covert channel is no longer a threat. We further conduct a detailed formal analysis on the scheme in a companion document. When evaluated on SPEC06 workloads, STT incurs 8.5% or 14.5% performance overhead relative to an insecure machine. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Actionable Auditing Revisited--Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.
- Author
-
Raji, Inioluwa Deborah and Buolamwini, Joy
- Subjects
- *
HUMAN facial recognition software , *AUDITING , *HUMAN skin color , *GENDER , *ERRORS , *SOCIAL responsibility of business - Abstract
Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits and continue to find it difficult to translate such independent assessments into meaningful corporate accountability. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender- and skin-type performance disparities in commercial facial analysis models. This paper (1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, (2) presents new performance metrics from targeted companies such as IBM, Microsoft, and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, (3) provides performance results on PPB by non-target companies such as Amazon and Kairos, and (4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new application program interface (API) versions. All targets reduced accuracy disparities between males and females and darker- and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup that underwent a 17.7-30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72-8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively. 
This is an expanded version of an earlier publication of these results, revised for a more general audience, and updated to include commentary on further developments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Bias and Debias in Recommender System: A Survey and Future Directions.
- Author
-
JIAWEI CHEN, HANDE DONG, XIANG WANG, FULI FENG, MENG WANG, and XIANGNAN HE
- Abstract
While recent years have witnessed rapid growth in research papers on recommender systems (RS), most papers focus on inventing machine learning models to better fit user behavior data. However, user behavior data is observational rather than experimental, so various biases exist widely in the data, including but not limited to selection bias, position bias, exposure bias, and popularity bias. Blindly fitting the data without considering these inherent biases leads to many serious issues, e.g., a discrepancy between offline evaluation and online metrics, and damage to user satisfaction and trust in the recommendation service. To transform the large volume of research models into practical improvements, it is highly urgent to explore the impacts of the biases and perform debiasing when necessary. When reviewing the papers that consider biases in RS, we find, to our surprise, that the studies are rather fragmented and lack a systematic organization. The term "bias" is widely used in the literature, but its definition is usually vague and even inconsistent across papers. This motivates us to provide a systematic survey of existing work on RS biases. In this paper, we first summarize seven types of biases in recommendation, along with their definitions and characteristics. We then provide a taxonomy to position and organize the existing work on recommendation debiasing. Finally, we identify some open challenges and envision future directions, with the hope of inspiring more research on this important yet under-investigated topic. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. File Packing from the Malware Perspective: Techniques, Analysis Approaches, and Directions for Enhancements.
- Author
-
MURALIDHARAN, TRIVIKRAM, COHEN, AVIAD, GERSON, NOA, and NISSIM, NIR
- Subjects
- *
MACHINE learning , *MALWARE , *TIME management - Abstract
With the growing sophistication of malware, the need to devise improved malware detection schemes is crucial. The packing of executable files, which is one of the most common techniques for code protection, has been repurposed for code obfuscation by malware authors as a means of evading malware detectors (mainly static analysis-based detectors). This paper provides statistics on the use of packers based on an extensive analysis of 24,000 PE files (both malicious and benign files) for the past 10 years, which allowed us to observe trends in packing use during that time and showed that packing is still widely used in malware. This paper then surveys 23 methods proposed in academic research for the detection and classification of packed portable executable (PE) files and highlights various trends in malware packing. The paper highlights the differences between the methods and their abilities to detect and identify various aspects of packing. A taxonomy is presented, classifying the methods as static, dynamic, and hybrid analysis-based methods. The paper also sheds light on the increasing role of machine learning methods in the development of modern packing detection methods. We analyzed and mapped the different packing methods and identified which of them can be countered by the detection methods surveyed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. EEG Based Emotion Recognition: A Tutorial and Review.
- Author
-
XIANG LI, YAZHOU ZHANG, PRAYAG TIWARI, DAWEI SONG, BIN HU, MEIHONG YANG, ZHIGANG ZHAO, NEERAJ KUMAR, and MARTTINEN, PEKKA
- Subjects
- *
EMOTION recognition , *ELECTROENCEPHALOGRAPHY , *ARTIFICIAL intelligence , *AFFECTIVE computing , *MENTAL health , *HUMAN-computer interaction , *AFFECTIVE neuroscience , *WAKEFULNESS - Abstract
Emotion recognition through analysis of the EEG signal is currently an essential topic in Artificial Intelligence and holds great potential in emotional health care, human-computer interaction, multimedia content recommendation, etc. Though several works have been devoted to reviewing EEG-based emotion recognition, their content needs updating. In addition, those works are either fragmented in content or focus only on specific techniques adopted in this area, neglecting the holistic perspective of the entire technical routes. Hence, in this paper, we review the topic from the perspective of researchers taking their first steps into it. We review recent representative works in EEG-based emotion recognition research and provide a tutorial to guide researchers starting from the beginning. The scientific basis of EEG-based emotion recognition at the psychological and physiological levels is introduced. Further, we categorize the reviewed works into different technical routes and illustrate the theoretical basis and research motivation of each, which will help readers better understand why those techniques are studied and employed. Finally, existing challenges and future investigations are discussed, guiding researchers toward potential future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Introduction to the Special Issue on the Award Papers of USENIX ATC 2019.
- Author
-
Malkhi, Dahlia and Tsafrir, Dan
- Subjects
- *
COMPUTER systems , *AWARDS , *DIGITAL watermarking , *EMAIL - Published
- 2020
- Full Text
- View/download PDF
40. Intel Software Guard Extensions Applications: A Survey.
- Author
-
WILL, NEWTON C. and MAZIERO, CARLOS A.
- Subjects
- *
DATA privacy , *DATA integrity , *COMPUTER systems , *TRUST , *COMPUTER software - Abstract
Data confidentiality is a central concern in modern computer systems and services, as sensitive data from users and companies are increasingly delegated to such systems. Several hardware-based mechanisms have recently been proposed to enforce security guarantees for sensitive information. Hardware-based isolated execution environments are one class of such mechanisms, in which the operating system and other low-level components are removed from the trusted computing base. One such mechanism is Intel Software Guard Extensions (Intel SGX), which introduces the concept of an enclave to encapsulate sensitive components of applications and their data. Despite being widely applied in several computing areas, SGX has limitations and performance issues that must be addressed for the development of secure solutions. This text presents a categorized literature review of ongoing research on the Intel SGX architecture, discussing its applications and providing a classification of the solutions that take advantage of SGX mechanisms. We analyze and categorize 293 papers that rely on SGX to provide integrity, confidentiality, and privacy to users and data, across different contexts and goals. We also discuss research challenges and provide future directions in the field of enclaved execution, particularly when using SGX. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. Combining Machine Learning and Semantic Web: A Systematic Mapping Study.
- Author
-
BREIT, ANNA, WALTERSDORFER, LAURA, EKAPUTRA, FAJAR J., SABOU, MARTA, EKELHART, ANDREAS, IANA, ANDREEA, PAULHEIM, HEIKO, PORTISCH, JAN, REVENKO, ARTEM, TEIJE, ANNETTE TEN, and VAN HARMELEN, FRANK
- Subjects
- *
ARTIFICIAL intelligence , *MACHINE learning , *SEMANTIC Web , *KNOWLEDGE graphs , *DEEP learning , *KNOWLEDGE representation (Information theory) - Abstract
In line with the general trend in artificial intelligence research to create intelligent systems that combine learning and symbolic components, a new sub-area has emerged that focuses on combining Machine Learning components with techniques developed by the Semantic Web community--Semantic Web Machine Learning (SWeML). Due to its rapid growth and impact on several communities in the past two decades, there is a need to better understand the space of these SWeML Systems, their characteristics, and trends. Yet, surveys that adopt principled and unbiased approaches are missing. To fill this gap, we performed a systematic study and analyzed nearly 500 papers published in the past decade in this area, where we focused on evaluating architectural and application-specific features. Our analysis identified a rapidly growing interest in SWeML Systems, with a high impact on several application domains and tasks. Catalysts for this rapid growth are the increased application of deep learning and knowledge graph technologies. By leveraging the in-depth understanding of this area acquired through this study, a further key contribution of this article is a classification system for SWeML Systems that we publish as an ontology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. AI-Empowered Persuasive Video Generation: A Survey.
- Author
-
CHANG LIU and HAN YU
- Subjects
- *
SOFT power (Social sciences) , *PROMOTIONAL films , *SOCIAL enterprises , *ENVIRONMENTAL music , *ONLINE shopping , *3-D animation - Abstract
Promotional videos are rapidly becoming a popular medium for persuading people to change their behaviours in many settings (e.g., online shopping, social enterprise initiatives). Today, such videos are often produced by professionals, which is a time-, labour- and cost-intensive undertaking. In order to produce such contents to support large applications (e.g., e-commerce), the field of artificial intelligence (AI)-empowered persuasive video generation (AIPVG) has gained traction in recent years. This field is interdisciplinary in nature, which makes it challenging for new researchers to grasp. Currently, there is no comprehensive survey of AIPVG available. In this paper, we bridge this gap by reviewing key AI techniques that can be utilized to automatically generate persuasive videos. We offer a first-of-its-kind taxonomy which divides AIPVG into three major steps: (1) visual material understanding, which extracts information from the visual materials (VMs) relevant to the target of promotion; (2) visual storyline generation, which shortlists and arranges high-quality VMs into a sequence in order to compose a storyline with persuasive power; and (3) post-production, which involves background music generation and still image animation to enhance viewing experience. We also introduce the evaluation metrics and datasets commonly adopted in the field of AIPVG. We analyze the advantages and disadvantages of the existing works belonging to the above-mentioned steps, and discuss interesting potential future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Representation Bias in Data: A Survey on Identification and Resolution Techniques.
- Author
-
SHAHBAZI, NIMA, YIN LIN, ASUDEH, ABOLFAZL, and JAGADISH, H. V.
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *ACQUISITION of data - Abstract
Data-driven algorithms are only as good as the data they work with, yet datasets, especially social data, often fail to represent minorities adequately. Representation bias in data can arise for various reasons, ranging from historical discrimination to selection and sampling biases in the data acquisition and preparation methods. Given that "bias in, bias out," one cannot expect AI-based solutions to have equitable outcomes for societal applications without addressing issues such as representation bias. While there has been extensive study of fairness in machine learning models, including several review papers, bias in the data itself has been less studied. This article reviews the literature on identifying and resolving representation bias as a feature of a dataset, independent of how the dataset is consumed later. The scope of this survey is bounded to structured (tabular) and unstructured (e.g., image, text, graph) data. It presents taxonomies to categorize the studied techniques along multiple design dimensions and provides a side-by-side comparison of their properties. There is still a long way to go to fully address representation bias issues in data; the authors hope that this survey motivates researchers to approach these challenges by building on existing work within their respective domains. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
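The idea of representation bias "as a feature of a dataset" can be made concrete with a small sketch that flags groups whose share of a dataset falls well below their share of a reference population. The function name, threshold, and data are illustrative, not taken from the survey:

```python
from collections import Counter

def representation_gaps(sample, population_shares, tau=0.8):
    """Flag groups whose representation rate (observed share divided by
    expected share in the reference population) falls below tau."""
    n = len(sample)
    counts = Counter(sample)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / n
        rate = sample_share / pop_share  # 1.0 means perfectly proportional
        if rate < tau:
            gaps[group] = round(rate, 2)
    return gaps

population = {"A": 0.5, "B": 0.3, "C": 0.2}
dataset = ["A"] * 70 + ["B"] * 25 + ["C"] * 5   # group C is under-represented
print(representation_gaps(dataset, population))  # → {'C': 0.25}
```

This is a dataset-level check, independent of any downstream model, which matches the survey's framing of representation bias as a property of the data itself.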
44. A Comprehensive Survey of Few-shot Learning: Evolution, Applications, Challenges, and Opportunities.
- Author
-
YISHENG SONG, TING WANG, PUYU CAI, MONDAL, SUBROTA K., and SAHOO, JYOTI PRAKASH
- Subjects
- *
COMPUTER vision , *PYRAMIDS , *PRIOR learning - Abstract
Few-shot learning (FSL) has emerged as an effective learning method and shows great potential. Despite the recent creative works in tackling FSL tasks, learning valid information rapidly from just a few or even zero samples remains a serious challenge. In this context, we extensively investigated 200+ FSL papers published in top journals and conferences in the past three years, aiming to present a timely and comprehensive overview of the most recent advances in FSL with a fresh perspective and to provide an impartial comparison of the strengths and weaknesses of existing work. To avoid conceptual confusion, we first elaborate and contrast a set of relevant concepts including few-shot learning, transfer learning, and meta-learning. Then, we inventively extract prior knowledge related to few-shot learning in the form of a pyramid, which summarizes and classifies previous work in detail from the perspective of challenges. Furthermore, to enrich this survey, we present in-depth analysis and insightful discussions of recent advances in each subsection. What is more, taking computer vision as an example, we highlight the important application of FSL, covering various research hotspots. Finally, we conclude the survey with unique insights into technology trends and potential future research opportunities to guide FSL follow-up research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
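The N-way K-shot episodic setting that the FSL survey above is organized around can be sketched as an episode sampler: each episode draws a small labelled support set and a query set to classify against it. This is a generic illustration under assumed names, not code from the paper.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=2, rng=None):
    """Sample one N-way K-shot episode: K labelled support examples per
    class plus q_queries query examples per class to be classified."""
    rng = rng or random.Random()
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        items = rng.sample(dataset[label], k_shot + q_queries)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query

# Toy dataset: 6 classes with 10 examples each.
data = {c: [f"{c}_{i}" for i in range(10)] for c in "abcdef"}
support, query = sample_episode(data, n_way=3, k_shot=2, q_queries=1,
                                rng=random.Random(0))
print(len(support), len(query))  # 6 3
```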
45. Multimodal Sentiment Analysis: A Survey of Methods, Trends, and Challenges.
- Author
-
DAS, RINGKI and SINGH, THOUDAM DOREN
- Subjects
- *
SENTIMENT analysis , *NATURAL language processing , *USER-generated content , *ELECTRIC transformers , *TASK analysis - Abstract
Sentiment analysis has come a long way since it was introduced as a natural language processing task nearly 20 years ago. It aims to extract the underlying attitudes and opinions toward an entity and has become a powerful tool used in government, business, medicine, marketing, and beyond. Traditional sentiment analysis models focus mainly on text content. However, technological advances now allow people to express their opinions and feelings through audio, image, and video channels. As a result, sentiment analysis is shifting from unimodality to multimodality. Multimodal sentiment analysis brings new opportunities, as complementary data streams enable richer and deeper sentiment detection that goes beyond text-based analysis. Researchers have pursued different approaches to improving system performance, including complex deep neural architectures, and sentiment analysis has recently achieved significant success using transformer-based models. This paper presents a comprehensive study of sentiment analysis approaches, applications, challenges, and resources, and concludes that the field holds tremendous potential. The primary motivation of this survey is to highlight the shift from unimodality to multimodality in solving sentiment analysis tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
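One simple way the unimodal-to-multimodal shift described above is often realized is decision-level (late) fusion: each modality produces its own sentiment score, and the scores are combined. The [-1, 1] score range, the weighting scheme, and the toy values here are illustrative assumptions, not the survey's method.

```python
def late_fusion(scores, weights=None):
    """Combine per-modality sentiment scores in [-1, 1] by a weighted
    average (decision-level / late fusion) and map to a label."""
    weights = weights or {m: 1.0 for m in scores}
    total_w = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_w
    label = "positive" if fused > 0 else "negative" if fused < 0 else "neutral"
    return fused, label

# Text is mildly positive, but tone of voice and facial expression disagree.
fused, label = late_fusion({"text": 0.4, "audio": -0.6, "video": -0.5})
print(fused, label)
```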
46. A Survey on Underwater Computer Vision.
- Author
-
GONZÁLEZ-SABBAGH, SALMA P. and ROBLES-KELLY, ANTONIO
- Subjects
- *
COMPUTER vision , *INTERNET surveys , *IMAGE reconstruction , *IMAGE processing , *SUBMERSIBLES , *OBJECT recognition (Computer vision) , *IMAGE recognition (Computer vision) , *AUTONOMOUS underwater vehicles , *AUTOMOBILE license plates - Abstract
Underwater computer vision has attracted increasing attention in the research community due to recent advances in underwater platforms such as rovers, gliders, and autonomous underwater vehicles (AUVs), which now make possible the acquisition of vast amounts of imagery and video for applications such as biodiversity assessment, environmental monitoring, and search and rescue. Despite this growing interest, underwater computer vision remains a relatively under-researched area, with most attention in the literature paid to computer vision techniques for image restoration and reconstruction, where image formation models and image processing methods are used to recover colour-corrected or enhanced images. This emphasis stems from the notion that such methods can achieve photometric invariants for higher-level vision tasks such as shape recovery and recognition under the challenging and widely varying imaging conditions that apply to underwater scenes. In this paper, we review underwater computer vision techniques for image reconstruction, restoration, recognition, depth estimation, and shape recovery. Further, we review current applications such as biodiversity assessment, management and protection, infrastructure inspection, and AUV navigation, amongst others. We also discuss current trends in the field and examine the challenges and opportunities in the area. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
47. State of Practical Applicability of Regression Testing Research: A Live Systematic Literature Review.
- Author
-
GRECA, RENAN, MIRANDA, BRENO, and BERTOLINO, ANTONIA
- Subjects
- *
COMPUTER software testing , *TEST systems , *SCALABILITY - Abstract
Context: Software regression testing refers to rerunning test cases after the system under test is modified, to ascertain that the changes have not (re-)introduced failures. Not all proposed approaches consider applicability and scalability concerns, and few have produced an impact in practice. Objective: One goal is to investigate the industrial relevance and applicability of proposed approaches. Another is to provide a live review, open to continuous updates by the community. Method: A systematic review is conducted of regression testing studies that are clearly motivated by, or validated against, industrial relevance and applicability. It is complemented by follow-up surveys with the authors of the selected papers and with 23 practitioners. Results: A set of 79 primary studies published between 2016 and 2022 is collected and classified according to approaches and metrics. Aspects relating to their relevance and impact are discussed, also drawing on their authors' feedback. All the data are made available in the live repository that accompanies the study. Conclusions: While widely motivated by industrial relevance and applicability, few approaches are evaluated on industrial or large-scale open-source systems, and even fewer have been adopted in practice. Some challenges hindering the implementation of relevant approaches are synthesized, also based on the practitioners' feedback. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
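A classic family of techniques reviewed in regression testing surveys like the one above is coverage-based test prioritization. Below is a minimal sketch of the greedy "additional coverage" strategy: repeatedly pick the test that covers the most not-yet-covered code elements. The test and element names are hypothetical.

```python
def prioritize_by_additional_coverage(coverage):
    """Greedy 'additional' strategy: order tests so each next test adds
    the most new coverage; tests adding nothing are appended at the end."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Pick the test that adds the most new coverage (ties: by name).
        best = max(sorted(remaining),
                   key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            order += sorted(remaining)  # the rest add nothing new
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical tests t1..t3 covering hypothetical functions f1..f5.
cov = {"t1": {"f1", "f2"}, "t2": {"f2"}, "t3": {"f3", "f4", "f5"}}
order = prioritize_by_additional_coverage(cov)
print(order)  # ['t3', 't1', 't2']
```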
48. Deep Person Generation: A Survey from the Perspective of Face, Pose, and Cloth Synthesis.
- Author
-
TONG SHA, WEI ZHANG, TONG SHEN, ZHOUJUN LI, and TAO MEI
- Subjects
- *
DATA augmentation , *VIDEOCONFERENCING , *DEEP learning , *GENERATIVE adversarial networks , *TEXTILES , *GAZE - Abstract
Deep person generation has attracted extensive research attention due to its wide applications in virtual agents, video conferencing, online shopping, and art/movie production. With the advancement of deep learning, the visual appearance (face, pose, cloth) of a person image can be easily generated on demand. In this survey, we first summarize the scope of person generation and then systematically review recent progress and technical trends in identity-preserving deep person generation, covering three major tasks: talking-head generation (face), pose-guided person generation (pose), and garment-oriented person generation (cloth). More than two hundred papers are covered for a thorough overview, and milestone works are highlighted to trace the major technical breakthroughs. Building on these fundamental tasks, many applications are investigated, e.g., virtual fitting, digital humans, and generative data augmentation. We hope this survey can shed some light on the future prospects of identity-preserving deep person generation and provide a helpful foundation for building full-fledged digital-human applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Attention-guided Adversarial Attack for Video Object Segmentation.
- Author
-
Yao, Rui, Chen, Ying, Zhou, Yong, Hu, Fuyuan, Zhao, Jiaqi, Liu, Bing, and Shao, Zhiwen
- Subjects
- *
DEEP learning , *VIDEOS - Abstract
Video Object Segmentation (VOS) methods have made many breakthroughs with the continuous development and advancement of deep learning. However, deep learning models are vulnerable to malicious adversarial attacks, which mislead a model into wrong decisions by adding adversarial perturbations, imperceptible to humans, to the input image. These threats indicate that video object segmentation methods are also vulnerable to attacks that compromise their security. We therefore study adversarial attacks on the VOS task to better identify the vulnerabilities of VOS methods, which in turn provides an opportunity to improve their robustness. In this paper, we propose an attention-guided adversarial attack method that uses spatial attention blocks to capture features with global dependencies, constructs correlations between consecutive video frames, and performs multipath aggregation to effectively integrate spatial-temporal perturbations, thereby guiding a deconvolution network to generate adversarial examples with strong attack capability. Specifically, a class loss function is designed to enable the deconvolution network to better activate noise in other regions and suppress activations related to the object class, based on the enhanced feature map of the object class. At the same time, an attentional feature loss is designed to enhance the transferability of the attack. Experimental results on the DAVIS dataset show that the proposed attention-guided adversarial attack method can significantly reduce the segmentation accuracy of OSVOS, achieving a 73.6% drop rate in the J&F mean on DAVIS 2016. The generated adversarial examples are also highly transferable to other video object segmentation models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
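The paper's attention-guided attack is far more elaborate, but the underlying idea of gradient-sign adversarial perturbation can be sketched on a toy logistic model, where the loss gradient is available in closed form. All names and values here are illustrative, not the paper's method.

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style step on a logistic model p = sigmoid(w.x + b): move
    each feature by eps in the sign of the loss gradient w.r.t. x,
    increasing the cross-entropy loss for the true label y (0 or 1)."""
    p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    # d(loss)/dx_i = (p - y) * w_i for cross-entropy loss
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

def predict(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = [2.0, -1.0], 0.0
x = [0.3, 0.1]                        # clean input, predicted class 1
adv = fgsm_perturb(x, w, b, y=1, eps=0.4)
print(predict(x, w, b), predict(adv, w, b))  # 1 0: the attack flips it
```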
50. Scheduling of Resource Allocation Systems with Timed Petri Nets: A Survey.
- Author
-
BO HUANG, MENGCHU ZHOU, XIAOYU SEAN LU, and ABUSORRAH, ABDULLAH
- Subjects
- *
PETRI nets , *RESOURCE allocation , *DISCRETE systems , *SCHEDULING - Abstract
Resource allocation systems (RASs) are a class of discrete event systems commonly seen in industry. In such systems, available resources are allocated to concurrently running processes to optimize some performance criteria. Search strategies in the reachability graph (RG) of a timed Petri net (PN) have attracted much attention in past decades for coping with RAS scheduling problems (RSPs), since PNs are well suited to modeling and analyzing RASs and their RGs fully reflect system behavior. However, no related survey or review paper has existed until now. In this work, we present a tutorial and comprehensive literature survey of RG-based RSP methods. Many state-of-the-art RG-based RAS scheduling strategies are reviewed and summarized. First, we present a framework for RSPs and classify RSPs and their PNs in terms of resource usage and net structures; the differences and relations among the PNs are also given. Then, we introduce timed PN construction methods for RSPs, along with scheduling objectives and search strategies for RG-based RSPs. Next, we summarize the different heuristic functions adopted in the frequently used A* search to solve RG-based RSPs. Finally, we discuss some important future research directions and open issues. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
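The A* search over a reachability graph that the survey above summarizes can be sketched generically: expand the marking minimizing accumulated firing time g plus an admissible heuristic h. The toy graph, edge costs, and heuristic values below are illustrative assumptions, not from any particular Petri net model.

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: pop the node minimising f = g + h, where g is the
    accumulated transition cost and h an admissible heuristic."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue  # already expanded with a cheaper cost
        best_g[node] = g
        for nxt, cost in successors(node):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Toy "reachability graph": markings A..D with timed transition costs.
edges = {"A": [("B", 2), ("C", 5)], "B": [("D", 4)], "C": [("D", 1)], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
cost, path = a_star("A", "D", lambda m: edges[m], lambda m: heuristic[m])
print(cost, path)
```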