424 results
Search Results
2. SCHEDULING AND PLANNING APPLICATIONS: SELECTED PAPERS FROM THE SPARK WORKSHOP SERIES
- Author
-
Neil Yorke-Smith, Gabriella Cortellessa, and Luis Castillo Vidal
- Subjects
Computational Mathematics, Artificial Intelligence, Computer science, Scheduling (production processes), Industrial engineering
- Published
- 2011
3. Explainable artificial intelligence for medical imaging: Review and experiments with infrared breast images.
- Author
-
Raghavan, Kaushik, Balasubramanian, Sivaselvan, and Veezhinathan, Kamakoti
- Subjects
*BREAST, *ARTIFICIAL intelligence, *COMPUTER-assisted image analysis (Medicine), *INFRARED imaging, *MACHINE learning, *BREAST imaging, *DEEP learning, *ASSISTIVE technology
- Abstract
There is a growing trend of using artificial intelligence, particularly deep learning algorithms, in medical diagnostics, revolutionizing healthcare by improving efficiency, accuracy, and patient outcomes. However, the use of artificial intelligence in medical diagnostics comes with the critical need to explain the reasoning behind artificial intelligence-based predictions and ensure transparency in decision-making. Explainable artificial intelligence has emerged as a crucial research area to address the need for transparency and interpretability in medical diagnostics. Explainable artificial intelligence techniques aim to provide insights into the decision-making process of artificial intelligence systems, enabling clinicians to understand the factors the algorithms consider in reaching their predictions. This paper presents a detailed review of saliency-based (visual) methods, such as class activation methods, which have gained popularity in medical imaging as they provide visual explanations by highlighting the regions of an image most influential in the artificial intelligence's decision. We also survey the literature on non-visual methods, although the focus remains on visual methods. Building on the existing literature, we also experiment with infrared breast images for detecting breast cancer. Towards the end of this paper, we also propose an "attention guided Grad-CAM" that enhances the visualizations for explainable artificial intelligence. The existing literature shows that explainable artificial intelligence techniques have not been explored in the context of infrared medical images, which opens up a wide range of opportunities for further research to make clinical thermography an assistive technology for the medical community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
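The class-activation idea this abstract reviews can be sketched in a few lines. This is a toy, dependency-free illustration of the Grad-CAM weighting scheme (gradient-derived channel weights, weighted sum, ReLU); the `grad_cam` function and the plain-list feature maps are hypothetical stand-ins for real CNN tensors, not the authors' implementation:

```python
def grad_cam(feature_maps, gradients):
    """Weight each feature map by the mean of its gradients (global
    average pooling), sum the weighted maps, and clamp at zero (ReLU)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, grad in zip(feature_maps, gradients):
        # alpha_k: importance of channel k for the target class
        alpha = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    # ReLU: keep only regions with a positive influence on the class
    return [[max(0.0, v) for v in row] for row in cam]
```

In real use, `gradients` come from backpropagating the class score; the "attention guided Grad-CAM" the paper proposes modifies this weighting in a way the abstract does not specify.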
4. Distributed system anomaly detection using deep learning‐based log analysis.
- Author
-
Han, Pengfei, Li, Huakang, Xue, Gang, and Zhang, Chao
- Subjects
DEEP learning, ARTIFICIAL intelligence, SAWLOGS, SPATIAL memory, NATURAL languages
- Abstract
Anomaly detection is a key step in ensuring the security and reliability of large-scale distributed systems. Analyzing system logs through artificial intelligence methods can quickly detect anomalies and thus help maintenance personnel to maintain system security. Most current works focus only on the temporal or spatial features of distributed system logs, and they cannot sufficiently extract the global features of distributed system logs to achieve high anomaly detection accuracy. To address the shortcomings of existing methods, this paper proposes a deep learning model with global spatiotemporal features to detect the presence of anomalies in distributed system logs. First, we extract semi-structured log events from log templates and model them as natural language. In addition, we capture the temporal characteristics of logs using the bidirectional long short-term memory network and the spatial invocation characteristics of logs using the Transformer. Extensive experimental evaluations show the advantages of our proposed model for distributed system log anomaly detection tasks. The optimal F1-Score on three open-source datasets and our own collected distributed system dataset reaches 98.04%, 94.34%, 88.16%, and 97.40%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
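The first step this abstract describes, turning semi-structured log lines into templates that can be modeled as natural language, is commonly approximated by wildcard substitution over variable fields. A minimal sketch (the regex rules and the `to_template` name are illustrative assumptions, not the authors' parser):

```python
import re

def to_template(line):
    """Collapse variable fields (IPs, hex ids, numbers) into wildcards so
    lines produced by the same print statement share one template."""
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)   # IPv4 addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)          # hex identifiers
    line = re.sub(r"\b\d+\b", "<NUM>", line)                     # plain integers
    return line
```

The resulting template sequences are what a BiLSTM (temporal) and a Transformer (spatial invocation) model would then consume.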
5. Application of computer information technology in college physical education using fuzzy evaluation theory.
- Subjects
PHYSICAL education, COMPUTER engineering, ARTIFICIAL intelligence, COMPUTERS in education, INFORMATION technology
- Abstract
The educational sector faces a new dimension that is dominated by lifelong learning and affected by technical, social, and cultural changes. This pattern represents the need to improve teaching methods for physical education and sports science. Using computers and other information technology to increase the effectiveness of the teaching process is a modern method. This paper aims to illustrate the use of information and communication technologies (ICT) in physical education and sports. In our field, the results of gradual computerization can be summed up in the following aspects: education software, design and planning activities, recording outcomes, motion monitoring, video analysis, comparison and synchronization of performance, measurements at distance and time, and the evaluation of the activity. Although physical education and sports are practical activities, specialists can make use of modern teaching technologies. In this paper, the system of curriculum assessment for physical education has been analyzed and researched in computer assessment. The first section introduces the method of assessment of the physical education program. In the second phase of the paper, a mathematical teaching model for physical education utilizing Comprehensive Adaptive Fuzzy Evaluation Theory is proposed. The artificial intelligence computer education system built in this paper takes the modernization of physical education to a new level. The experimental results show high performance in detecting the physical activity of college students. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
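Comprehensive fuzzy evaluation, as used in this paper's assessment model, aggregates a weight vector over criteria with a membership matrix over grades (B = W x R in the standard notation). A minimal sketch under the usual fuzzy-evaluation conventions; the function name and data are illustrative, not the authors' model:

```python
def fuzzy_evaluate(weights, membership):
    """Aggregate B = W * R: `weights` distributes importance over criteria,
    membership[i][j] is the degree to which criterion i earns grade j.
    Returns the membership degree of the overall result in each grade."""
    grades = len(membership[0])
    return [sum(w * row[j] for w, row in zip(weights, membership))
            for j in range(grades)]
```

The final grade is typically the one with the maximum aggregated degree, e.g. `max(range(len(b)), key=b.__getitem__)`.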
6. Realization of interactive animation creation based on artificial intelligence technology.
- Author
-
Cai, YuXin, Dong, Haibin, Wang, Wei, and Song, Hongchang
- Subjects
ARTIFICIAL intelligence, PIXELS, SELF-expression, HUMAN body, HUMAN-computer interaction, ELECTRONIC data processing, 3-D animation, TELEVISION broadcasting of films
- Abstract
In film and television animation works, animated characters are the soul and core of the work. The behavior, language expression, and emotional expression of animated characters play an important role in the expression of the animation theme and content. Aiming at the problem that the mobile animation system can only add and change actions for a single virtual character, and the characters cannot interact with each other, this paper analyzes the technical principles, technical characteristics, and application scope of human–computer interaction (HCI), taking sensors as the research object. An algorithm for separating the human body from the background environment in the depth image is proposed. Through the calculation of the depth value, the calculation results are compared, and the target human body and the background are effectively separated. In the depth data processing, the algorithm of judging the pixel offset value is used to identify the body part, and a sensor-based HCI system is designed. The depth-of-field data map acquired by the sensor is used to identify human body parts and determine actions, thereby realizing HCI based on action recognition. Simulation test results show that the effective rate of the system is 80%, and the design of animated characters can be put into the visualization stage. Using the algorithm in this paper, the physical signs of the animated characters can be quickly identified, so that the next action of the animation can be more clearly captured, which gives the method practical value. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
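The depth-value comparison this abstract describes, separating the target human body from the background, reduces in its simplest form to thresholding the depth map. A toy sketch; the single fixed threshold is an assumption for illustration, whereas the paper compares computed depth values and pixel offsets:

```python
def segment_foreground(depth, threshold):
    """Mark pixels closer to the sensor than `threshold` as the human
    body (1); everything deeper is treated as background (0)."""
    return [[1 if d < threshold else 0 for d in row] for row in depth]
```

Body-part identification would then operate only on the pixels marked 1.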
7. Modeling multiple interactions with a Markov random field in query expansion for session search.
- Author
-
Li, Jingfei, Zhao, Xiaozhao, Zhang, Peng, and Song, Dawei
- Subjects
MARKOV processes, SEARCH engines, MARKOV random fields, QUERY (Information retrieval system), ARTIFICIAL intelligence, INFORMATION retrieval
- Abstract
Abstract: How to automatically understand and answer users' questions (eg, queries issued to a search engine) expressed with natural language has become an important yet difficult problem across the research fields of information retrieval and artificial intelligence. In a typical interactive Web search scenario, namely, session search, to obtain relevant information, the user usually interacts with the search engine for several rounds in the forms of, eg, query reformulations, clicks, and skips. These interactions are usually mixed and intertwined with each other in a complex way. For the ideal goal, an intelligent search engine can be seen as an artificial intelligence agent that is able to infer what information the user needs from these interactions. However, there still exists a big gap between the current state of the art and this goal. In this paper, in order to bridge the gap, we propose a Markov random field–based approach to capture dependence relations among interactions, queries, and clicked documents for automatic query expansion (as a way of inferring the information needs of the user). An extensive empirical evaluation is conducted on large‐scale web search data sets, and the results demonstrate the effectiveness of our proposed models. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
8. ResNLS: An improved model for stock price forecasting.
- Author
-
Jia, Yuanzhe, Anaissi, Ali, and Suleiman, Basem
- Subjects
*STOCK price forecasting, *DEEP learning, *MACHINE learning, *STOCK prices, *PRICES
- Abstract
Stock price forecasting has always been a challenging task. Although many research projects adopt machine learning and deep learning algorithms to address the problem, few of them pay attention to the varying degrees of dependency between stock prices. In this paper, we introduce a hybrid model that improves stock price prediction by emphasizing the dependencies between adjacent stock prices. The proposed model, ResNLS, is mainly composed of two neural architectures, ResNet and LSTM. ResNet serves as a feature extractor to identify dependencies between stock prices across time windows, while LSTM analyses the initial time-series data in combination with these dependencies, which are treated as residuals. In predicting the SSE Composite Index, our experiment reveals that when the closing price data for the previous five consecutive trading days is used as the input, the performance of the model (ResNLS-5) is optimal compared to those with other inputs. Furthermore, ResNLS-5 outperforms vanilla CNN, RNN, LSTM, and BiLSTM models in terms of prediction accuracy. It also demonstrates at least a 20% improvement over the current state-of-the-art baselines. To verify whether ResNLS-5 can help clients effectively avoid risks and earn profits in the stock market, we construct a quantitative trading framework for back testing. The experimental results show that the trading strategy based on predictions from ResNLS-5 can successfully mitigate losses during declining stock prices and generate profits in the periods of rising stock prices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
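The ResNLS-5 input construction, pairing each day's close with the closing prices of the previous five consecutive trading days, is a sliding-window transform. A minimal sketch (the `make_windows` helper is illustrative, not the authors' code):

```python
def make_windows(closes, n=5):
    """Pair each day's closing price with the n preceding closes:
    X[t] = closes[t-n:t] is the input window, y[t] = closes[t] the target."""
    xs, ys = [], []
    for t in range(n, len(closes)):
        xs.append(closes[t - n:t])
        ys.append(closes[t])
    return xs, ys
```

The windows `xs` would feed the ResNet feature extractor, with `ys` as regression targets.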
9. Design of online intelligent English teaching platform based on artificial intelligence techniques.
- Author
-
Sun, Zhuomin, Anbarasan, M., and Praveen Kumar, D.
- Subjects
ARTIFICIAL intelligence, INTELLIGENT tutoring systems, SYSTEMS theory, ALGORITHMS, ONLINE education, RECOMMENDER systems
- Abstract
Artificial intelligence education (AIEd) is defined in the field of education as the utilization of artificial intelligence. There are currently many AIEd-driven applications in schools and universities. This paper applies an artificial intelligence module combined with knowledge recommendation to the system and develops an online English teaching system, comparing it with a common teaching auxiliary system. The method of English teaching is useful in investigating the potential internal connections between evaluation outcomes and various factors. This article develops a deep learning-assisted online intelligent English teaching system that provides a modern tool platform to help students improve their English learning efficiency in line with their mastery of knowledge and personality. The decision tree algorithm and neural networks have been used to generate an English teaching assessment implementation model based on decision tree technologies. It extracts valuable data from extensive information, summarizes rules and data, and helps teachers improve their education and the English scores of students. This system reflects the thinking of an artificial intelligence expert system. The test application demonstrates that the system can help students improve their learning efficiency and makes learning content more relevant. Besides, the system provides an example model for similar methods and has reference value. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Residual neural network‐assisted one‐class classification algorithm for melanoma recognition with imbalanced data.
- Author
-
Yu, Lisu, Wang, Yifei, Zhou, Liyu, Wu, Jinsheng, and Wang, Zhenghai
- Subjects
*CLASSIFICATION algorithms, *IMAGE recognition (Computer vision), *MELANOMA, *SKIN cancer, *ARTIFICIAL intelligence, *DECODING algorithms, *LINEAR network coding
- Abstract
Melanoma is a deadly form of skin cancer whose survival rates improve significantly when it is diagnosed at an early stage. It is usually diagnosed visually from dermoscopic images, and such visual assessment of skin cancer by the naked eye is a challenging and arduous task. Therefore, the detection of melanoma from dermoscopic images using trained artificial intelligence models is of great importance today. However, since melanoma is a rare disease, existing databases of skin lesions often contain highly unbalanced numbers of benign and malignant samples. In this paper, we propose a new one-class classification-based skin lesion classification strategy for small and unbalanced datasets. One-class classification (OCC) is a special case of multi-classification. OCC aims to learn a descriptive paradigm from positive class data (true data) during training and reject pseudo data (fake data) that do not conform to the paradigm during inference. OCC has great potential for application in anomaly detection problems. We have analyzed several approaches to the OCC task in recent years and propose a new design paradigm for the OCC problem, taking into account the unbalanced data set of the melanoma classification task. We have designed an improved OCC network based on this design paradigm, where the network is based on the architecture of a residual neural network, combining the encoding and decoding idea of the variational autoencoder and the adversarial training idea of an adversarial neural network, using binary cross-entropy as the loss function and introducing the channel attention mechanism. Tests on several publicly available dermatology datasets show that this improved OCC network addresses the unbalanced dataset situation in melanoma image classification to some extent while having relatively excellent performance. Compared with some traditional networks, it can obtain more stable training results and perform more consistently on complex datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
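The OCC decision rule this abstract outlines, accepting samples that conform to the learned paradigm and rejecting the rest, is often realized by thresholding a reconstruction or conformity score learned on positive data only. A minimal sketch; the quantile-based threshold is an assumed design choice, not the paper's exact rule:

```python
def occ_predict(errors_train, errors_test, quantile=0.9):
    """Fit a threshold at the given quantile of training (positive-class)
    reconstruction errors; test samples above it are rejected as
    out-of-class. Returns True for accepted (in-class) samples."""
    s = sorted(errors_train)
    k = min(len(s) - 1, int(quantile * len(s)))
    threshold = s[k]
    return [e <= threshold for e in errors_test]
```

In the paper's setting the errors would come from the residual/variational network; here any per-sample score works.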
11. Artificial intelligence‐based wind forecasting using variational mode decomposition.
- Author
-
V, Vanitha, G, Sophia J., R, Resmi, and Raphel, Delna
- Subjects
WIND speed, ARTIFICIAL intelligence, FUZZY logic, WIND forecasting, WIND power, ELECTRIC power distribution grids
- Abstract
Intermittency in wind poses the major challenge to establishing wind energy as a dependable sustainable energy resource in the power grid. Fluctuations in wind speed occur seasonally over a year, and if this seasonality is considered, the prediction of wind speed can be made more accurate. In this paper, an attempt is made to apply a signal decomposition technique called Variational Mode Decomposition (VMD), which decomposes a series of wind speed data into several intrinsic mode functions (IMFs) to make the data more regular, thereby enhancing the accuracy of the wind speed forecast model. Then, an artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), is applied for wind speed prediction by combining the modes obtained from VMD. Here, wind data of two sites in India, Jogimatti and Lamba, are taken for the study. Each site's data is grouped into high and low wind speed months, and this series is decomposed into regular modes using VMD. ANFIS is then applied for training and predicting the wind speed over different time horizons. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. A values‐driven self‐organization mechanism for automating multiagent coordination.
- Author
-
Jiao, Wenpin and Sun, Yanchun
- Subjects
MULTIAGENT systems, INFORMATION sharing, COMMUNICATION & technology, AUTOMATION, ARTIFICIAL intelligence
- Abstract
Abstract: In distributed and open environments, MASs (multiagent systems) generally have no mechanisms for prior coordination, and self-organization has been regarded as the necessary means to achieve the coordination of agents. This paper first presents a values-driven model for self-organization in which the expected emergent properties of a system are specified as social values, while the social values are realized by implicitly inducing members to regulate their individual values and adjust their behaviors to fit the expectations of the system. Based on values-driven self-organization, this paper proposes an automated coordination mechanism for decentralized MASs. In this mechanism, by indirectly changing the difficulty of acquiring resources (which may be delegated to some special agents, since MASs generally do not have substantial bodies), MASs can lead agents to regulate their values to be consistent with the social values of the MAS, so that coordination can spontaneously emerge from the local behaviors of agents. Finally, this paper implements a simulated traffic system using the coordination mechanism based on values-driven self-organization to validate the emergence of coordination among multiple agents. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. Structural plan similarity based on refinements in the space of partial plans.
- Author
-
Sánchez‐Ruiz, Antonio A. and Ontañón, Santiago
- Subjects
ARTIFICIAL intelligence, LINUX operating systems, CONSTRAINT satisfaction, OPERATOR theory, DIGITAL storytelling
- Abstract
Plan similarity measures play a key role in many areas of artificial intelligence, such as case-based planning, plan recognition, ambient intelligence, or digital storytelling. In this paper, we present 2 novel structural similarity measures to compare plans based on a search process in the space of partial plans. Partial plans are compact representations of sets of plans with some common structure and can be organized in a lattice so that the most general partial plans are above the most specific ones. To compute our similarity measures, we traverse this space of partial plans from the most general to the most specific using successive refinements. Our first similarity measure is designed for propositional plan formalisms, and the second is designed for classical planning formalisms (including variables and types). We also introduce 2 novel refinement operators used to traverse the space of plans: an ideal downward refinement operator for propositional partial plans and a finite and complete downward refinement operator for classical partial plans. Finally, we evaluate our similarity measures in the context of a nearest neighbor classifier using 2 datasets commonly used in the plan recognition literature (Linux and Monroe), showing good results in both synthetic and real data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Preface.
- Author
-
Cercone, Nick, Skowron, Andrzej, and Zhong, Ning
- Subjects
ARTIFICIAL intelligence
- Abstract
Introduces a series of articles on artificial intelligence which were published in the August 2001 issue of `Computational Intelligence.'
- Published
- 2001
- Full Text
- View/download PDF
15. Improved infrared small target detection and tracking method based on new intelligence particle filter.
- Author
-
Chen, Zhimin, Tian, Mengchu, Bo, Yuming, and Ling, Xiaodong
- Subjects
MONTE Carlo method, ALGORITHMS, EMISSIVITY, COMPUTATIONAL intelligence, ARTIFICIAL intelligence
- Abstract
Abstract: The track-before-detect algorithm based on the particle filter has the problems of low tracking precision, poor particle quality, and requiring a large number of particles to be calculated at a low signal-to-noise ratio, which makes it difficult to meet the accuracy and speed required by a modern infrared search and tracking system. In this paper, an improved infrared small target detection and tracking method based on a new particle filter is proposed, in which each particle represents an individual bat so as to imitate the hunting process of bats. By adjusting the loudness, frequency, and pulse emission rate of the particle swarm, the optimal particle at each time step is followed to search the solution space. In addition, the global search and the local search can be dynamically switched to improve the quality and distribution of the particle swarm. The performance of the proposed algorithm is tested in a simulation scene and a real scene of infrared small target detection and tracking. Experimental results show that the proposed algorithm improves the performance of the infrared search and tracking system. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
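The bat-inspired particle movement the abstract describes, following the current optimal particle with a randomly drawn pulse frequency, can be sketched in one dimension. This is illustrative only; the paper's version also adapts loudness and pulse emission rate and switches between global and local search:

```python
import random

def bat_step(positions, velocities, best, fmin=0.0, fmax=1.0):
    """One movement step of a bat-inspired swarm: each particle draws a
    random pulse frequency f and moves toward the current best particle."""
    new_pos, new_vel = [], []
    for x, v in zip(positions, velocities):
        f = fmin + (fmax - fmin) * random.random()  # pulse frequency
        v = v + (x - best) * f                       # steer toward the best
        new_vel.append(v)
        new_pos.append(x + v)
    return new_pos, new_vel
```

A particle already at the best position keeps its velocity, so the swarm contracts around the optimum over iterations.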
16. Preface.
- Author
-
Junker, Ulrich, Delgrande, James, Doyle, Jon, Rossi, Francesca, and Schaub, Torsten
- Subjects
DECISION theory, CONSTRAINT satisfaction, ARTIFICIAL intelligence, REASONING, PREFERENCES (Philosophy)
- Abstract
Presents an overview of articles on decision theory, constraints and reasoning. Discussion on the nature, roles, origins, representation, structure, and change of preferences; Analysis of ceteris paribus preferences between arbitrary propositional formulas; Information on different methods for aggregating qualitative preference values.
- Published
- 2004
- Full Text
- View/download PDF
17. A recent survey on the applications of genetic programming in image processing.
- Author
-
Khan, Asifullah, Qureshi, Aqsa Saeed, Wahab, Noorul, Hussain, Mutawarra, and Hamza, Muhammad Yousaf
- Subjects
IMAGE processing, GENETIC programming, COMPUTER vision, ARTIFICIAL intelligence, IMAGE compression, MULTISPECTRAL imaging, FEATURE selection
- Abstract
Genetic programming (GP) has been primarily used to tackle optimization, classification, and feature selection related tasks. The widespread use of GP is due to its flexible and comprehensible tree-type structure. Similarly, research is also gaining momentum in the field of image processing, because of its promising results over vast areas of applications ranging from medical image processing to multispectral imaging. Image processing is mainly involved in applications such as computer vision, pattern recognition, image compression, storage, and medical diagnostics. The universal nature of images and the complexity of their associated algorithms gave an impetus to the exploration of GP. GP has thus been used in different ways for image processing since its inception. Many interesting GP techniques have been developed and employed in the field of image processing, and consequently, we aim to provide the research community an extensive view of these techniques. This survey thus presents the diverse applications of GP in image processing and provides useful resources for further research. In addition, the comparison of different parameters used in different applications of image processing is summarized in tabular form. Moreover, analysis of the different parameters used in image processing related tasks is carried out to save the time needed in the future for evaluating the parameters of GP. As more advancement is made in GP methodologies, its success in solving complex tasks, not only in image processing but also in other fields, may increase. In addition, guidelines are provided for applying GP in image processing related tasks, the pros and cons of GP techniques are discussed, and some future directions are also outlined. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. A LATTICE-BASED APPROACH TO THE PROBLEM OF RECRUITMENT IN MULTIAGENT SYSTEMS.
- Author
-
Amigoni, Francesco and Continanza, Luca
- Subjects
MULTIAGENT systems, ARTIFICIAL intelligence, DISTRIBUTED computing, COMPUTER algorithms, PROBLEM solving, LATTICE theory
- Abstract
Multiagent systems constitute an independent topic at the intersection between distributed computing and artificial intelligence. As the algorithmic techniques and the applications for multiagent systems have been continuously developing over the last two decades reaching significantly mature stages, many methodological problems have been addressed. In this paper, we aim to contribute to this methodological assessment of multiagent systems by considering the problem of choosing, or recruiting, a subset of agents from a set of available agents to satisfy a given request. This problem, which we call problem of recruitment, is encountered, for example, in matchmaking and in task allocation. We present and study a novel formal approach to the problem of recruitment, based on the algebraic formalism of lattices. The resulting formal framework can support the development of algorithms for automatic recruitment. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
19. EXPLOITING SUBTREES IN AUTO-PARSED DATA TO IMPROVE DEPENDENCY PARSING.
- Author
-
Chen, Wenliang, Kazama, Jun'ichi, Uchimoto, Kiyotaka, and Torisawa, Kentaro
- Subjects
NATURAL language processing, PARSING (Computer grammar), ELECTRONIC data processing, ARTIFICIAL intelligence, HUMAN-computer interaction, LANGUAGE & languages, SEMANTIC computing
- Abstract
Dependency parsing has attracted considerable interest from researchers and developers in natural language processing. However, to obtain a high-accuracy dependency parser, supervised techniques require a large volume of hand-annotated data, which are extremely expensive. This paper presents a simple and effective approach for improving dependency parsing with subtrees derived from unannotated data, which are easy to obtain. First, we use a baseline parser to parse large-scale unannotated data. Then, we extract subtrees from dependency parse trees in the auto-parsed data. Next, the extracted subtrees are classified into several sets according to their frequency. Finally, we design new features based on the subtree sets for parsing algorithms. To demonstrate the effectiveness of our proposed approach, we conduct experiments on the English Penn Treebank and Chinese Penn Treebank. The results show that our approach significantly outperforms baseline systems. It also achieves the best accuracy for the Chinese data and an accuracy competitive with the best known systems for the English data. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
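The frequency-based classification of extracted subtrees that this abstract describes can be sketched as a bucketing step. The thresholds and bucket names below are illustrative assumptions; the paper classifies subtree sets by frequency to define new parser features:

```python
from collections import Counter

def bucket_subtrees(subtrees, hi=0.1, lo=0.01):
    """Split extracted subtrees into high/medium/low-frequency sets by
    their share of the total count (thresholds are illustrative)."""
    counts = Counter(subtrees)
    total = sum(counts.values())
    buckets = {"high": set(), "medium": set(), "low": set()}
    for st, c in counts.items():
        share = c / total
        if share >= hi:
            buckets["high"].add(st)
        elif share >= lo:
            buckets["medium"].add(st)
        else:
            buckets["low"].add(st)
    return buckets
```

Each bucket membership then becomes a binary feature for the parsing algorithm.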
20. HIGH-PRECISION BIO-MOLECULAR EVENT EXTRACTION FROM TEXT USING PARALLEL BINARY CLASSIFIERS.
- Author
-
Van Landeghem, Sofie, De Baets, Bernard, Van de Peer, Yves, and Saeys, Yvan
- Subjects
DATA mining, MACHINE learning, TEXT mining, CLASSIFIERS (Linguistics), BIOLOGY, ARTIFICIAL intelligence
- Abstract
We have developed a machine learning framework to accurately extract complex genetic interactions from text. Employing type-specific classifiers, this framework processes research articles to extract various biological events. Subsequently, the algorithm identifies regulation events that take other events as arguments, allowing a nested structure of predictions. All predictions are merged into an integrated network, useful for visualization and for deduction of new biological knowledge. In this paper, we discuss several design choices for an event-based extraction framework. These detailed studies help improve on existing systems, as illustrated by our system's 10% relative performance gain compared to the official results in the recent BioNLP'09 Shared Task. Our framework now achieves state-of-the-art performance with 37.43 recall, 54.81 precision, and 44.48 F-score. We further present the first study of feature selection for bio-molecular event extraction from text. While producing more cost-effective models, feature selection can also lead to a better insight into the complexity of the challenge. Finally, this paper tries to bridge the gap between theoretical relation extraction from text and experimental work on bio-molecular interactions by discussing interesting opportunities to employ event-based text mining tools for real-life tasks such as hypothesis generation, database curation and knowledge discovery. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
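The figures reported in this abstract are internally consistent: the balanced F-score is the harmonic mean of precision and recall, and 54.81 precision with 37.43 recall does yield 44.48. A one-line check:

```python
def f_score(precision, recall):
    """Balanced F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```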
21. PRICE DYNAMICS, INFORMATIONAL EFFICIENCY, AND WEALTH DISTRIBUTION IN CONTINUOUS DOUBLE-AUCTION MARKETS.
- Author
-
Gil-Bazo, Javier, Moreno, David, and Tapia, Mikel
- Subjects
FINANCIAL markets, INFORMATION dissemination, ARTIFICIAL neural networks, ARTIFICIAL intelligence, COMPUTATIONAL intelligence
- Abstract
This paper studies the properties of the continuous double-auction trading mechanism using an artificial market populated by heterogeneous computational agents. In particular, we investigate how changes in the population of traders and in market microstructure characteristics affect price dynamics, information dissemination, and distribution of wealth across agents. In our computer-simulated market only a small fraction of the population observe the risky asset's fundamental value with noise, while the rest of the agents try to forecast the asset's price from past transaction data. In contrast to other artificial markets, we assume that the risky asset pays no dividend, thus agents cannot learn from past transaction prices and subsequent dividend payments. We find that private information can effectively disseminate in the market unless market regulation prevents informed investors from short selling or borrowing the asset, and these investors do not constitute a critical mass. In such case, not only are markets less efficient informationally, but may even experience crashes and bubbles. Finally, increased informational efficiency has a negative impact on informed agents' trading profits and a positive impact on artificial intelligent agents' profits. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
22. COMPARING PRONOUN RESOLUTION ALGORITHMS.
- Author
-
Mitkov, Ruslan and Hallett, Catalina
- Subjects
NATURAL language processing, ARTIFICIAL intelligence, COMPUTATIONAL intelligence, HUMAN-computer interaction, ELECTRONIC data processing
- Abstract
This paper discusses the comparative evaluation of five well-known pronoun resolution algorithms conducted with the help of a purpose-built tool for consistent evaluation in anaphora resolution, termed the evaluation workbench. The workbench enables the evaluation and comparison of pronoun resolution algorithms on the basis of the same preprocessing tools and test data. The tool is controlled by the user who can conduct the evaluation according to a variety of parameters, with regard to the types of anaphors and the samples used for evaluation. The extensive comparative evaluation of the pronoun resolution algorithms showed that their performance was significantly lower than the figures reported in the original papers describing the algorithms. The evaluation study concluded that the main reason for this drop in performance is the fact that all algorithms operate in a fully automatic mode. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
23. SORTAL ANAPHORA RESOLUTION IN MEDLINE ABSTRACTS.
- Author
-
Torii, Manabu and Vijay-Shanker, K.
- Subjects
ANAPHORA (Linguistics) ,MEDLINE ,ABSTRACTS ,BIOINFORMATICS ,MACHINE learning ,MACHINE theory ,ARTIFICIAL intelligence ,NATURAL language processing - Abstract
This paper reports our investigation of machine learning methods applied to anaphora resolution for biology texts, particularly paper abstracts. Our primary concern is the investigation of features and their combinations for effective anaphora resolution. In this paper, we focus on the resolution of demonstrative phrases and definite determiner phrases, the two most prevalent forms of anaphoric expressions that we find in biology research articles. Different resolution models are developed for demonstrative and definite determiner phrases. Our work shows that models may be optimized differently for each of the phrase types. Also, because a significant number of definite determiner phrases are not anaphoric, we induce a model to detect anaphoricity, i.e., a model that classifies phrases as either anaphoric or nonanaphoric. We propose several novel features that we call highlighting features, and consider their utility particularly for processing paper abstracts. The system using the highlighting features achieved accuracies of 78% and 71% for demonstrative phrases and definite determiner phrases, respectively. The use of the highlighting features reduced the error rate by about 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
24. FAST AND ROBUST INCREMENTAL ACTION PREDICTION FOR INTERACTIVE AGENTS.
- Author
-
Dinerstein, Jonathan, Ventura, Dan, and Egbert, Parris K.
- Subjects
INTELLIGENT agents ,ARTIFICIAL intelligence software ,COMPUTER software ,ARTIFICIAL intelligence ,AUTOMATION ,MOBILE agent systems - Abstract
The ability for a given agent to adapt on-line to better interact with another agent is a difficult and important problem. This problem becomes even more difficult when the agent to interact with is a human, because humans learn quickly and behave nondeterministically. In this paper, we present a novel method whereby an agent can incrementally learn to predict the actions of another agent (even a human), and thereby can learn to better interact with that agent. We take a case-based approach, where the behavior of the other agent is learned in the form of state–action pairs. We generalize these cases either through continuous k-nearest neighbor, or a modified bounded minimax search. Through our case studies, our technique is empirically shown to require little storage, learn very quickly, and be fast and robust in practice. It can accurately predict actions several steps into the future. Our case studies include interactive virtual environments involving mixtures of synthetic agents and humans, with cooperative and/or competitive relationships. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
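The case-based scheme in the abstract above — storing observed state–action pairs and generalizing with continuous k-nearest neighbor — can be sketched roughly as follows. The Euclidean distance metric, the majority vote, and all names here are illustrative assumptions, not the authors' implementation:

```python
import math

class ActionPredictor:
    """Incremental case-based action prediction: store observed
    (state, action) cases and predict by k-nearest-neighbor vote."""

    def __init__(self, k=3):
        self.k = k
        self.cases = []  # list of (state_vector, action) pairs

    def observe(self, state, action):
        # Incremental learning: simply append the new case.
        self.cases.append((state, action))

    def predict(self, state):
        # Majority vote among the k nearest stored states.
        if not self.cases:
            return None
        nearest = sorted(
            self.cases,
            key=lambda case: math.dist(state, case[0])
        )[: self.k]
        votes = {}
        for _, action in nearest:
            votes[action] = votes.get(action, 0) + 1
        return max(votes, key=votes.get)

predictor = ActionPredictor(k=3)
predictor.observe((0.0, 0.0), "wait")
predictor.observe((1.0, 0.0), "chase")
predictor.observe((1.2, 0.1), "chase")
result = predictor.predict((1.1, 0.0))
print(result)  # "chase" — the majority of the 3 nearest cases
```

Because learning is just an append and prediction a nearest-neighbor lookup, storage stays small and updates are fast, matching the on-line setting the abstract describes.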
25. Integration of AI with reduced order generalized integrator controller for power system harmonic reduction.
- Author
-
Mandava, Srihari, Medarametla, Praveen Kumar, Gudipalli, Abhishek, Saravanan, M, and Sudheer, P
- Subjects
INTEGRATORS ,REACTIVE power ,FUZZY logic ,ELECTRIC power filters ,ARTIFICIAL intelligence ,VOLTAGE control - Abstract
The increased use of electronics for control, together with consumers' nonlinear loads, injects harmonics into the power signal in present power system networks, making power quality a growing challenge for researchers. In this work, the reduced order generalized integrator (ROGI) and fuzzy logic control (FLC) are used together to reduce current harmonics in the power system. An FLC‐SVM technique operates the shunt active filter (SAF) at a fixed switching frequency. The proposed control technique includes a current control loop for fast and effective control. A proportional resonant controller provides slower, effective voltage control and also compensates reactive power. The shunt active filter is designed with the help of the ROGI by integrating a PI controller with the fuzzy‐based SVM to reduce current harmonics on the load side. Results are obtained in MATLAB/Simulink. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
26. High‐speed and area‐efficient Sobel edge detector on field‐programmable gate array for artificial intelligence and machine learning applications.
- Author
-
Sikka, Prateek, Asati, Abhijit R, and Shekhar, Chandra
- Subjects
ARTIFICIAL intelligence ,GATE array circuits ,MACHINE learning ,COMPUTER vision ,ALGORITHMS - Abstract
The Sobel edge detector is an algorithm commonly used in image processing and computer vision to extract edges from input images using derivatives of image intensity in the x and y directions, computed against surrounding pixels. Most artificial intelligence and machine learning applications require image processing algorithms running in real time on hardware systems such as field‐programmable gate arrays (FPGAs). These algorithms typically require high throughput to match real‐time speeds and, since they run alongside other processing algorithms, must be area efficient as well. This article proposes a high‐speed and low‐area implementation of the Sobel edge detection algorithm. We created the design using a novel high‐level synthesis (HLS) design method based on application‐specific bit widths for intermediate data nodes. Register transfer level code was generated using the MATLAB hardware description language (HDL) coder for HLS. The generated HDL code was implemented on a Xilinx Kintex 7 FPGA using Xilinx Vivado software. Our implementation results are superior, in terms of area and speed, to those obtained for similar implementations using the vendor library block sets, as well as to those obtained by other researchers in the recent past. We tested our algorithm on the Kintex 7 using real‐time input video with a frame resolution of 1920 × 1080. We also verified the functional simulation results against a golden MATLAB implementation using the FPGA-in-the-loop feature of HDL Verifier. In addition, we propose a generic area, speed, and power improvement methodology for different HLS tools and application designs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
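The Sobel operator summarized in the abstract above convolves the image with 3×3 horizontal and vertical gradient kernels; hardware designs commonly approximate the gradient magnitude as |Gx| + |Gy| to avoid a square root. A minimal software sketch of the operator itself (pure Python, not the paper's FPGA/HLS implementation):

```python
# 3x3 Sobel kernels: horizontal (GX) and vertical (GY) gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude approximated as |Gx| + |Gy| for interior
    pixels; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(3):
                for dx in range(3):
                    p = img[y + dy - 1][x + dx - 1]
                    gx += GX[dy][dx] * p
                    gy += GY[dy][dx] * p
            # |gx| + |gy| avoids a square root, as hardware designs do.
            out[y][x] = abs(gx) + abs(gy)
    return out

# A sharp vertical edge between dark (0) and bright (255) columns:
img = [[0, 0, 255, 255]] * 4
edges = sobel_magnitude(img)
print(edges[1])  # [0, 1020, 1020, 0]
```

The doubly nested inner loop is exactly what an FPGA pipeline unrolls into fixed multiply-accumulate hardware, which is why bit-width tuning of the intermediate nodes (as in the paper) pays off in area.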
27. Mental Computation and Language Breakdown: Clarifications, Extensions, and Responses.
- Author
-
Frawley, William
- Subjects
COMPUTATIONAL intelligence ,ARTIFICIAL intelligence - Abstract
This paper is a response to commentaries on my target paper for Computational Intelligence, “Control and Cross-Domain Mental Computation: Evidence from Language Breakdown.” In this response, I acknowledge certain errors in my initial construal of control and dismiss unwarranted criticisms. I then reexamine both control and certain language disorders in light of the explicitness of cross-domain communication and the visibility of representations to each other. In the end, I reassert the validity of the logic/control (visibility) distinction in mental computation and argue that the contrasts between Specific Language Impairment and Williams syndrome parallel this distinction. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
28. DISCOVERING ROBUST EMBEDDINGS IN (DIS)SIMILARITY SPACE FOR HIGH-DIMENSIONAL LINGUISTIC FEATURES.
- Author
-
Mu, Tingting, Miwa, Makoto, Tsujii, Junichi, and Ananiadou, Sophia
- Subjects
NATURAL language processing ,ALGORITHMS ,KERNEL functions ,HUMAN-computer interaction ,ARTIFICIAL intelligence ,COMPUTATIONAL linguistics - Abstract
Recent research has shown the effectiveness of rich feature representations for tasks in natural language processing (NLP). However, an exceedingly large number of features does not always improve classification performance. Such features may contain redundant information, lead to noisy feature representations, and render the learning algorithms intractable. In this paper, we propose a supervised embedding framework that modifies the relative positions between instances to increase the compatibility between the input features and the output labels, while preserving the local distribution of the original data in the embedded space. The proposed framework attempts to support a flexible balance between the preservation of intrinsic geometry and the enhancement of class separability for both interclass and intraclass instances. It takes into account characteristics of linguistic features by using an inner product-based optimization template. (Dis)similarity features, also known as empirical kernel mapping, are employed to enable computationally tractable processing of extremely high-dimensional input, and also to handle nonlinearities in embedding generation when necessary. Evaluated on two NLP tasks with six data sets, the proposed framework provides better classification performance than the support vector machine without using any dimensionality reduction technique. It also generates embeddings with better class discriminability than many existing embedding algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
29. REPLANNING MECHANISM FOR DELIBERATIVE AGENTS IN DYNAMIC CHANGING ENVIRONMENTS.
- Author
-
Corchado, J. M., Glez-Bedia, M., De Paz, Y., Bajo, J., and De Paz, J. F.
- Subjects
PLANNING ,REASONING ,CONSTRAINT satisfaction ,ARTIFICIAL intelligence ,SIMULATION methods & models ,RESEARCH methodology - Abstract
This paper proposes a replanning mechanism for deliberative agents as a new approach to tackling the frame problem. We propose a beliefs, desires, and intentions (BDI) agent architecture using a case-based planning (CBP) mechanism for reasoning. We discuss the characteristics of the problems faced in planning when constraint satisfaction problem (CSP) resources are limited and formulate, through variation techniques, a reasoning-model agent to resolve them. The design of the proposed agent, named MRP-Ag (most-replanable agent), is evaluated in different environments through a series of simulation experiments, comparing it with others such as E-Ag (Efficient Agent) and O-Ag (Optimum Agent). Finally, the most important results are summarized, and the notion of an adaptable agent is introduced. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
30. LEARNING STRUCTURED BAYESIAN NETWORKS: COMBINING ABSTRACTION HIERARCHIES AND TREE-STRUCTURED CONDITIONAL PROBABILITY TABLES.
- Author
-
Desjardins, Marie, Rathod, Priyang, and Getoor, Lise
- Subjects
MACHINE learning ,BAYESIAN analysis ,CLUSTER analysis (Statistics) ,COMPUTATIONAL intelligence ,COMPUTATIONAL learning theory ,ARTIFICIAL intelligence - Abstract
Context-specific independence representations, such as tree-structured conditional probability distributions, capture local independence relationships among the random variables in a Bayesian network (BN). Local independence relationships among the random variables can also be captured by using attribute-value hierarchies to find an appropriate abstraction level for the values used to describe the conditional probability distributions. Capturing this local structure is important because it reduces the number of parameters required to represent the distribution. This can lead to more robust parameter estimation and structure selection, more efficient inference algorithms, and more interpretable models. In this paper, we introduce Tree-Abstraction-Based Search (TABS), an approach for learning a data distribution by inducing the graph structure and parameters of a BN from training data. TABS combines tree structure and attribute-value hierarchies to compactly represent conditional probability tables. To construct the attribute-value hierarchies, we investigate two data-driven techniques: a global clustering method, which uses all of the training data to build the attribute-value hierarchies, and can be performed as a preprocessing step; and a local clustering method, which uses only the local network structure to learn attribute-value hierarchies. We present empirical results for three real-world domains, finding that (1) combining tree structure and attribute-value hierarchies improves the accuracy of generalization, while providing a significant reduction in the number of parameters in the learned networks, and (2) data-derived hierarchies perform as well or better than expert-provided hierarchies. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
31. LEARNING TO SUPPORT CONSTRAINT PROGRAMMERS.
- Author
-
Epstein, Susan L., Freuder, Eugene C., and Wallace, Richard J.
- Subjects
CONSTRAINT satisfaction ,CONSTRAINT programming ,MACHINE learning ,ARTIFICIAL intelligence ,COMPUTATIONAL intelligence ,COMPUTER programming - Abstract
This paper describes the Adaptive Constraint Engine (ACE), an ambitious ongoing research project to support constraint programmers, both human and machine. The program begins with substantial knowledge about constraint satisfaction and harnesses a cognitively oriented architecture (FORR) to manage search heuristics and to learn new ones. ACE can transfer what it learns on simple problems to solve more difficult ones, and can readily export its knowledge to ordinary constraint solvers. It currently serves both as a learner and as a test bed for the constraint community. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
32. APPLYING MACHINE LEARNING TO LOW-KNOWLEDGE CONTROL OF OPTIMIZATION ALGORITHMS.
- Author
-
Carchrae, Tom and Beck, J. Christopher
- Subjects
MACHINE learning ,COMPUTER algorithms ,MATHEMATICAL optimization ,PRODUCTION scheduling ,ARTIFICIAL intelligence ,COMPUTATIONAL intelligence - Abstract
This paper addresses the question of allocating computational resources among a set of algorithms to achieve the best performance on scheduling problems. Our primary motivation in addressing this problem is to reduce the expertise needed to apply optimization technology. Therefore, we investigate algorithm control techniques that make decisions based only on observations of the improvement in solution quality achieved by each algorithm. We call our approach “low knowledge” since it does not rely on complex prediction models, either of the problem domain or of algorithm behavior. We show that a low-knowledge approach results in a system that achieves significantly better performance than all of the pure algorithms without requiring additional human expertise. Furthermore the low-knowledge approach achieves performance equivalent to a perfect high-knowledge classification approach. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
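The low-knowledge control idea in the abstract above — allocating computational resources using only observed improvement in solution quality, with no model of the problem or the algorithms — might be sketched as follows. The greedy "biggest recent improver" policy and the toy anytime solvers are illustrative assumptions, not the authors' algorithm:

```python
def low_knowledge_control(algorithms, rounds=6):
    """Allocate time slices among anytime solvers using only the
    observed improvement in solution quality (lower cost is better).
    `algorithms` maps names to callables; each call is one time slice
    and returns the solver's current best cost."""
    # Run every algorithm once to seed its best-known cost.
    best = {name: run() for name, run in algorithms.items()}
    recent_gain = {name: 1.0 for name in algorithms}
    for _ in range(rounds):
        # Greedy: the next slice goes to the biggest recent improver.
        name = max(recent_gain, key=recent_gain.get)
        cost = algorithms[name]()
        recent_gain[name] = max(best[name] - cost, 0.0)
        best[name] = min(best[name], cost)
    return min(best.items(), key=lambda kv: kv[1])

def make_solver(costs):
    """A fake anytime solver replaying a fixed cost trajectory."""
    it = iter(costs)
    last = [None]
    def run():
        last[0] = next(it, last[0])
        return last[0]
    return run

solvers = {
    "tabu": make_solver([50, 40, 35, 34, 34, 34]),
    "sa":   make_solver([60, 30, 20, 15, 15, 15]),
}
result = low_knowledge_control(solvers, rounds=6)
print(result)  # ('sa', 20)
```

In the run above the controller first exploits "tabu", and only when its improvement stalls does it shift slices to "sa" — decisions driven purely by observed gains, with no prediction model of either solver.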
33. LEARNING PRECONDITIONS FOR PLANNING FROM PLAN TRACES AND HTN STRUCTURE.
- Author
-
Ilghami, Okhtay, Nau, Dana S., Muñoz-Avila, Héctor, and Aha, David W.
- Subjects
PLANNING ,ARTIFICIAL intelligence ,COMPUTER algorithms ,MACHINE learning ,COMPUTATIONAL intelligence - Abstract
A great challenge in developing planning systems for practical applications is the difficulty of acquiring the domain information needed to guide such systems. This paper describes a way to learn some of that knowledge. More specifically, the following points are discussed. (1) We introduce a theoretical basis for formally defining algorithms that learn preconditions for Hierarchical Task Network (HTN) methods. (2) We describe Candidate Elimination Method Learner (CaMeL), a supervised, eager, and incremental learning process for preconditions of HTN methods. We state and prove theorems about CaMeL's soundness, completeness, and convergence properties. (3) We present empirical results about CaMeL's convergence under various conditions. Among other things, CaMeL converges the fastest on the preconditions of the HTN methods that are needed the most often. Thus CaMeL's output can be useful even before it has fully converged. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
34. THE DISCIPLE–RKF LEARNING AND REASONING AGENT.
- Author
-
Tecuci, Gheorghe, Boicu, Mihai, Boicu, Cristina, Marcu, Dorin, Stanescu, Bogdan, and Barbulescu, Marcel
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,PROBLEM solving ,LEARNING strategies ,REASONING - Abstract
Over the years we have developed the Disciple theory, methodology, and family of tools for building knowledge-based agents. This approach consists of developing an agent shell that can be taught directly by a subject matter expert, in a way that resembles how the expert would teach a human apprentice when solving problems in cooperation. This paper presents the most recent version of the Disciple approach and its implementation in the Disciple–RKF (rapid knowledge formation) system. Disciple–RKF is based on: mixed-initiative problem solving, where the expert solves the more creative parts of the problem and the agent solves the more routine ones; integrated teaching and learning, where the agent helps the expert to teach it by asking relevant questions, and the expert helps the agent to learn by providing examples, hints, and explanations; and multistrategy learning, where the agent integrates multiple learning strategies, such as learning from examples, learning from explanations, and learning by analogy, to learn from the expert how to solve problems. Disciple–RKF has been applied to build learning and reasoning agents for military center of gravity analysis, which are used in several courses at the US Army War College. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
35. A CONSTRAINED ARCHITECTURE FOR LEARNING AND PROBLEM SOLVING.
- Author
-
Jones, Randolph M. and Langley, Pat
- Subjects
COMPUTATIONAL intelligence ,PROBLEM solving ,ARTIFICIAL intelligence ,MEMORY ,COGNITIVE learning - Abstract
This paper describes Eureka, a problem-solving architecture that operates under strong constraints on its memory and processes. Most significantly, Eureka does not assume free access to its entire long-term memory. That is, failures in problem solving may arise not only from missing knowledge, but from the (possibly temporary) inability to retrieve appropriate existing knowledge from memory. Additionally, the architecture does not include systematic backtracking to recover from fruitless search paths. These constraints significantly impact Eureka's design. Humans are also subject to such constraints, but are able to overcome them to solve problems effectively. In Eureka's design, we have attempted to minimize the number of additional architectural commitments, while staying faithful to the memory constraints. Even under such minimal commitments, Eureka provides a qualitative account of the primary types of learning reported in the literature on human problem solving. Further commitments to the architecture would refine the details in the model, but the approach we have taken de-emphasizes highly detailed modeling to get at general root causes of the observed regularities. Making minimal additional commitments to Eureka's design strengthens the case that many regularities in human learning and problem solving are entailments of the need to handle imperfect memory. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
36. ON KNOWLEDGE GRID AND GRID INTELLIGENCE: A SURVEY.
- Author
-
Cheung, William K. and Jiming Liu
- Subjects
GRID computing ,COMPUTER systems ,ARTIFICIAL intelligence ,KNOWLEDGE management ,COMPUTER networks - Abstract
The next generation Web Intelligence (WI) aims at enabling users to go beyond existing online information search and knowledge query functionalities and to gain, from the Web, practical wisdom for problem solving. To support such a Wisdom Web, we envision that a grid-like computing infrastructure with intelligent service agencies is needed, where these agencies can interact, self-organize, learn, and evolve their courses of action, identities, and interrelationships for new knowledge creation, as well as scientific and social evolution. In this paper, we first provide an overview of recent development in WI and the Semantic/Knowledge Grid. Then, the fundamental capabilities of the Wisdom Web, as well as the conceptual architecture of an intelligent Grid for supporting it, are described. Technical challenges for realizing Grid Intelligence are highlighted and recent advancements in related research areas are reviewed. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
37. FEATURE-BASED KOREAN GRAMMAR UTILIZING LEARNED CONSTRAINT RULES.
- Author
-
So-Young Park, Yong-Jae Kwak, Hae-Chang Rim, and Heui-Seok Lim
- Subjects
NATURAL language processing ,ARTIFICIAL intelligence ,ELECTRONIC data processing ,ALGORITHMS ,KOREAN language - Abstract
In this paper, we propose a feature-based Korean grammar utilizing learned constraint rules in order to improve parsing efficiency. The proposed grammar consists of feature structures, feature operations, and constraint rules, and it has the following characteristics. First, a feature structure includes several features to express useful linguistic information for Korean parsing. Second, a feature operation generating a new feature structure is restricted to the binary-branching form, which can deal with Korean properties such as variable word order and constituent ellipsis. Third, constraint rules improve efficiency by preventing feature operations from generating spurious feature structures. Moreover, these rules are learned from a Korean treebank by a decision tree learning algorithm. The experimental results show that the feature-based Korean grammar can reduce the number of candidates by up to a third and runs 1.5–2 times faster than a CFG on a statistical parser. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
38. Preference-Based Constrained Optimization with CP-Nets.
- Author
-
Boutilier, Craig, Brafman, Ronen I., Domshlak, Carmel, Hoos, Holger H., and Poole, David
- Subjects
ARTIFICIAL intelligence ,COMPUTATIONAL intelligence ,MATHEMATICAL optimization ,PARETO optimum ,CONSTRAINT satisfaction - Abstract
Many artificial intelligence (AI) tasks, such as product configuration, decision support, and the construction of autonomous agents, involve a process of constrained optimization, that is, optimization of behavior or choices subject to given constraints. In this paper we present an approach for constrained optimization based on a set of hard constraints and a preference ordering represented using a CP-network—a graphical model for representing qualitative preference information. This approach offers both pragmatic and computational advantages. First, it provides a convenient and intuitive tool for specifying the problem, and in particular, the decision maker's preferences. Second, it admits an algorithm for finding the most preferred feasible (Pareto-optimal) outcomes that has the following anytime property: the set of preferred feasible outcomes are enumerated without backtracking. In particular, the first feasible solution generated by this algorithm is Pareto optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
39. Multi-Agent Constraint Systems with Preferences: Efficiency, Solution Quality, and Privacy Loss.
- Author
-
Franzin, M. S., Rossi, F., Freuder, E. C., and Wallace, R.
- Subjects
CONSTRAINT satisfaction ,ARTIFICIAL intelligence ,MATHEMATICAL optimization ,PARETO optimum ,FUZZY systems - Abstract
In this paper, we consider multi-agent constraint systems with preferences, modeled as soft constraint systems in which variables and constraints are distributed among multiple autonomous agents. We assume that each agent can set some preferences over its local data, and we consider two different criteria for finding optimal global solutions: fuzzy and Pareto optimality. We propose a general graph-based framework to describe the problem to be solved in its generic form. As a case study, we consider a distributed meeting scheduling problem where each agent has a pre-existing schedule and the agents must decide on a common meeting that satisfies a given optimality condition. For this scenario we consider the topics of solution quality, search efficiency, and privacy loss, where the latter pertains to information about an agent's pre-existing meetings and available time-slots. We also develop and test strategies that trade efficiency for solution quality and strategies that minimize information exchange, including some that do not require inter-agent comparisons of utilities. Our experimental results demonstrate some of the relations among solution quality, efficiency, and privacy loss, and provide useful hints on how to reach a tradeoff among these three factors. In this work, we show how soft constraint formalisms can be used to incorporate preferences into multi-agent problem solving along with other facets of the problem, such as time and distance constraints. This work also shows that the notion of privacy loss can be made concrete so that it can be treated as a distinct, manipulable factor in the context of distributed decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
40. CBPOP: A Domain-Independent Multi-Case Reuse Planner.
- Author
-
Britanik, J. and Marefat, M.
- Subjects
PLANNING ,COMPUTATIONAL intelligence ,PLANNERS ,ARTIFICIAL intelligence ,INFORMATION retrieval - Abstract
The reuse of multiple cases to solve a single planning problem promises better utilization of past experience than single-case reuse, which can lead to better planning performance. In this paper, we present the theory and implementation of CBPOP and show how it addresses multi-reuse planning problems. In particular, we present novel approaches to retrieval and refitting. We also explore the difficult issue of when to retrieve in multi-reuse scenarios, and we empirically compare the results of several solutions we propose. Results from our experiments show that the best ranking function for pure generative planning is not necessarily the best ranking function for multi-reuse planning. The surprising result in the reuse scenarios is that the single-goal case library performed better than larger case libraries consisting of solutions to multi-goal problems. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
41. Microplanning with Communicative Intentions: The SPUD System.
- Author
-
Stone, Matthew, Doran, Christine, Webber, Bonnie, Bleam, Tonia, and Palmer, Martha
- Subjects
NATURAL language processing ,ARTIFICIAL intelligence ,ONLINE data processing ,HUMAN-computer interaction ,MATHEMATICAL linguistics - Abstract
The process of microplanning in natural language generation (NLG) encompasses a range of problems in which a generator must bridge underlying domain-specific representations and general linguistic representations. These problems include constructing linguistic referring expressions to identify domain objects, selecting lexical items to express domain concepts, and using complex linguistic constructions to concisely convey related domain facts. In this paper, we argue that such problems are best solved through a uniform, comprehensive, declarative process. In our approach, the generator directly explores a search space for utterances described by a linguistic grammar. At each stage of search, the generator uses a model of interpretation, which characterizes the potential links between the utterance and the domain and context, to assess its progress in conveying domain-specific representations. We further address the challenges for implementation and knowledge representation in this approach. We show how to implement this approach effectively by using the lexicalized tree-adjoining grammar (LTAG) formalism to connect structure to meaning and using modal logic programming to connect meaning to context. We articulate a detailed methodology for designing grammatical and conceptual resources which the generator can use to achieve desired microplanning behavior in a specified domain. In describing our approach to microplanning, we emphasize that we are in fact realizing a deliberative process of goal-directed activity. As we formulate it, interpretation offers a declarative representation of a generator's communicative intent. It associates the concrete linguistic structure planned by the generator with inferences that show how the meaning of that structure communicates needed information about some application domain in the current discourse context. Thus, interpretations are plans that the microplanner constructs and outputs. 
At the same time, communicative intent representations provide a rich and uniform resource for the process of NLG. Using representations of communicative intent, a generator can augment the syntax, semantics, and pragmatics of an incomplete sentence simultaneously, and can work incrementally toward solutions for the various problems of microplanning. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
42. Utility Models for Goal-Directed, Decision-Theoretic Planners.
- Author
-
Haddawy, Peter and Hanks, Steve
- Subjects
ARTIFICIAL intelligence ,MATHEMATICS ,PROBLEM solving - Abstract
AI planning agents are goal-directed: success is measured in terms of whether an input goal is satisfied. The goal gives structure to the planning problem, and planning representations and algorithms have been designed to exploit that structure. Strict goal satisfaction may be an unacceptably restrictive measure of good behavior, however. A general decision-theoretic agent, on the other hand, has no explicit goals: success is measured in terms of an arbitrary preference model or utility function defined over plan outcomes. Although it is a very general and powerful model of problem solving, decision-theoretic choice lacks structure, which can make it difficult to develop effective plan-generation algorithms. This paper establishes a middle ground between the two models. We extend the traditional AI goal model in several directions: allowing goals with temporal extent, expressing preferences over partial satisfaction of goals, and balancing goal satisfaction against the cost of the resources consumed in service of the goals. In doing so we provide a utility model for a goal-directed agent. An important quality of the proposed model is its tractability. We claim that our model, like classical goal models, makes problem structure explicit. This structure can then be exploited by a problem-solving algorithm. We support this claim by reporting on two implemented planning systems that adopt and exploit our model. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
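The middle ground described in the abstract above — rewarding partial goal satisfaction while charging for resource consumption — can be illustrated with a toy utility function. The linear combination and the weights are assumptions for illustration only, not the authors' model:

```python
def plan_utility(goals_satisfied, goals_total, resource_cost,
                 goal_weight=10.0, cost_weight=1.0):
    """Utility rewards partial goal satisfaction and charges for the
    resources consumed; strict goal satisfaction is the special case
    where only goals_satisfied == goals_total earns any reward."""
    satisfaction = goals_satisfied / goals_total
    return goal_weight * satisfaction - cost_weight * resource_cost

# Cheaply satisfying 2 of 3 goals can beat fully satisfying all 3:
partial = plan_utility(2, 3, resource_cost=4.0)  # ~2.67
full = plan_utility(3, 3, resource_cost=9.0)     # 1.0
print(partial > full)  # True
```

Under strict goal satisfaction the second plan would always win; a utility model of this shape lets a planner prefer the cheaper, partially satisfying plan when the resource savings outweigh the unsatisfied goal.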
43. Universality and prediction in business rules.
- Author
-
Wang, Olivier, de Sainte Marie, Christian, Ke, Changhai, and Liberti, Leo
- Subjects
INTERPRETERS (Computer programs) ,MACHINE learning ,SEMANTICS ,ARTIFICIAL intelligence - Abstract
Business rules (BR) have the form ⟨if condition then action⟩. A BR program, which can be executed by means of an interpreter, is a sequence of business rules. Motivated by International Business Machines use cases, we look at the problem of setting parameter values in a given BR program so that it will achieve a given average goal over all possible instances. We explore the following fundamental question: is there a general learning algorithm which addresses this issue? We prove the answer is negative. On the positive side, we derive operational semantics for BR programs. As a proof of concept, we show empirically that these can be used to detect potential nontermination situations. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
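The ⟨if condition then action⟩ form the abstract describes can be made concrete with a minimal interpreter sketch. The rule set, state, tunable parameter, and iteration bound below are illustrative assumptions; the bound loosely mirrors the paper's concern with detecting potential nontermination.

```python
# Minimal sketch of a business-rule interpreter: rules are
# (condition, action) pairs; the interpreter repeatedly fires the first
# rule whose condition holds, until no rule applies.

def run(rules, state, max_iters=100):
    """Execute a BR program. The iteration bound is a crude guard
    against the nontermination situations the paper analyzes."""
    for _ in range(max_iters):
        for cond, action in rules:
            if cond(state):
                action(state)
                break
        else:
            return state  # no rule fired: normal termination
    raise RuntimeError("possible nontermination: iteration bound reached")

# Illustrative rule with a parameter of the kind the paper proposes
# tuning: apply a one-time 10% discount above a threshold.
THRESHOLD = 100
rules = [
    (lambda s: s["total"] > THRESHOLD and not s["discounted"],
     lambda s: s.update(total=s["total"] * 0.9, discounted=True)),
]
state = run(rules, {"total": 120.0, "discounted": False})
# state["total"] == 108.0, and the program terminates after one firing
```

Note how the `discounted` flag is what makes this program terminate; dropping it from the condition would make the rule fire forever, the kind of behavior the paper's operational semantics help detect.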
44. Reasoning about Locations in Theory and Practice.
- Author
-
Myers, Karen L. and Wilkins, David E.
- Subjects
REASONING, MOTION, ARTIFICIAL intelligence, MODELS & modelmaking, MATHEMATICAL models - Abstract
Locational reasoning plays an important role in many applications of AI problem-solving systems, yet has remained a relatively unexplored area of research. This paper addresses both theoretical and practical issues relevant to reasoning about locations. We define several theories of location designed for use in various settings, along with a sound and complete belief revision calculus for each that maintains a STRIPS-style database of locational facts. Techniques for the efficient operationalization of the belief revision rules in planning frameworks are presented. These techniques were developed during application of the location theories to several large-scale planning tasks within the SIPE planning framework. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
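A STRIPS-style database of locational facts, of the kind the abstract says the belief revision calculus maintains, can be illustrated with a small sketch. The predicate names, objects, and the retract-then-assert update rule are assumptions for illustration, not the paper's calculus.

```python
# Illustrative STRIPS-style locational database: facts are tuples like
# ("at", object, location). Moving an object revises the database by
# retracting its old location fact and asserting the new one, so the
# database never asserts two locations for the same object.

def move(db, obj, dest):
    db = {f for f in db if not (f[0] == "at" and f[1] == obj)}
    db.add(("at", obj, dest))
    return db

db = {("at", "truck1", "depot"), ("at", "crate1", "depot")}
db = move(db, "truck1", "site_a")
# truck1 is now at site_a; crate1's fact is untouched
```

The point of a sound and complete revision rule is exactly this invariant: after any sequence of moves, each object has one consistent location fact.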
45. Process and Policy: Resource-Bounded NonDemonstrative Reasoning.
- Author
-
Loui, Ronald P.
- Subjects
DIALECTIC, NONMONOTONIC logic, ARTIFICIAL intelligence, COMPUTER science - Abstract
This paper investigates the appropriateness of formal dialectics as a basis for nonmonotonic reasoning and defeasible reasoning that takes computational limits seriously. Rules that can come into conflict should be regarded as policies, which are inputs to deliberative processes. Dialectical protocols are appropriate for such deliberations when resources are bounded and search is serial. AI, it is claimed here, is now perfectly positioned to correct many misconceptions about reasoning that have resulted from mathematical logic's enormous success in this century: among them, (1) that all reasons are demonstrative, (2) that rational belief is constrained, not constructed, and (3) that process and disputation are not essential to reasoning. AI mainly provides new impetus to formalize the alternative (but older) conception of reasoning, and AI provides mechanisms with which to create compelling formalism that describes the control of processes. The technical contributions here are: the partial justification of dialectic based on controlling search; the observation that nonmonotonic reasoning can be subsumed under certain kinds of dialectics; the portrayal of inference in knowledge bases as policy reasoning; the review of logics of dialogue and proposed extensions; and the preformal and initial formal discussion of aspects and variations of dialectical systems with nondemonstrative reasons. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
46. A blockchain framework data integrity enhanced recommender system.
- Author
-
Altulyan, May, Yao, Lina, Kanhere, Salil, and Huang, Chaoran
- Subjects
RECOMMENDER systems, DATA integrity, BLOCKCHAINS, ARTIFICIAL intelligence, BIG data, TECHNOLOGICAL innovations, DATA transmission systems, CLOUD storage - Abstract
Recommender systems for the IoT (RSIoT) have attracted considerable attention. By leveraging emerging technologies such as the Internet of Things (IoT), artificial intelligence, and blockchain, RSIoT improves various aspects of residents' lives. However, data integrity threats may affect the accuracy and consistency of the data, particularly in the IoT environment, where most devices are inherently dynamic and have limited resources that can fail to ensure the quality of data transmission. Prior work has focused on processing big data and ensuring its integrity, with cloud storage services as the popular approach. In this article, we address data integrity by leveraging blockchain capabilities to protect critical data. We adapt the Ethereum blockchain to our recommender system to ensure the integrity of data shared between doctor and patient without their data being handled by a third party. We build four smart contracts that allow our system to take fuller advantage of the blockchain. We evaluated the performance of our smart contracts on the Kovan and Rinkeby test networks. The preliminary results show the feasibility and effectiveness of the proposed solution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
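The integrity idea behind such a framework, storing a fingerprint of each record on-chain and checking off-chain data against it, can be sketched without a real blockchain. Here a dict stands in for a smart contract's storage, and the record fields are illustrative; the paper's actual four Ethereum contracts are not reproduced.

```python
# Hedged sketch of hash-based integrity checking: publish a SHA-256
# digest of each record to "on-chain" storage, then verify any copy of
# the record against that digest before trusting it.
import hashlib
import json

def digest(record):
    # Canonical serialization (sorted keys) so equal records hash equally.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

onchain = {}  # stand-in for a smart contract's key-value storage

def publish(record_id, record):
    onchain[record_id] = digest(record)

def verify(record_id, record):
    return onchain.get(record_id) == digest(record)

record = {"patient": "p1", "reading": 98.6}
publish("r1", record)
tampered = {"patient": "p1", "reading": 99.9}
# verify("r1", record) holds; verify("r1", tampered) fails
```

On a real chain, only the digest is public, so the doctor and patient can exchange the record directly while anyone can still detect tampering.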
47. An effective context‐focused hierarchical mechanism for task‐oriented dialogue response generation.
- Author
-
Zhao, Meng, Jiang, Zejun, Wang, Lifang, Li, Ronghan, Lu, Xinyu, Hu, Zhongtian, and Chen, Daqing
- Subjects
ARTIFICIAL intelligence, NATURAL language processing - Abstract
A task‐oriented dialogue system (TOD) is one kind of artificial intelligence (AI) application. The response generation module is a key component of a TOD, replying to users' questions and concerns in sequential natural words. In the past few years, work on response generation has attracted increasing research attention and seen much progress. However, existing works ignore the fact that not every turn of the dialogue history contributes to response generation, and they give little consideration to the different weights of utterances in a dialogue history. In this article, we propose a hierarchical memory network mechanism with two steps to filter out unnecessary information from the dialogue history. First, an utterance‐level memory network distributes weights to each utterance (coarse‐grained). Second, a token‐level memory network assigns higher weights to keywords based on the former's output (fine‐grained). Furthermore, the output of the token‐level memory network is employed to query the knowledge base (KB) to capture dialogue‐related information. In the decoding stage, we take a gated mechanism to generate the response word by word from the dialogue history, the vocabulary, or the KB. Experiments show that the proposed model achieves superior results compared with state‐of‐the‐art models on several public datasets. Further analysis demonstrates the effectiveness of the proposed method and the robustness of the model in the case of an incomplete training set. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
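The two-step weighting the abstract describes, coarse-grained weights over utterances and then fine-grained weights over tokens, can be sketched with plain softmax attention. The relevance scores below are hand-set placeholders; in the paper they come from learned memory networks, which are not reproduced here.

```python
# Toy sketch of hierarchical attention over a dialogue history:
# utterance-level softmax first (coarse), then token-level softmax
# within each utterance (fine); a token's final weight is the product.
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

history = [["book", "a", "table"], ["for", "two", "please"]]
utt_scores = [2.0, 0.5]                          # coarse-grained relevance
tok_scores = [[1.5, 0.1, 2.0], [0.1, 1.0, 0.2]]  # fine-grained relevance

utt_w = softmax(utt_scores)
token_w = {tok: utt_w[i] * w
           for i, toks in enumerate(history)
           for tok, w in zip(toks, softmax(tok_scores[i]))}
# Keywords in the more relevant utterance ("table", "book") end up with
# larger final weights than filler tokens ("a", "two").
```

Because each level is a softmax, the final token weights still sum to 1, so they can be used directly as an attention distribution for querying a knowledge base.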
48. An enhanced ensemble machine learning classification method to detect attention deficit hyperactivity for various artificial intelligence and telecommunication applications.
- Author
-
Sheriff, Meeran and Gayathri, Rajagopal
- Subjects
ATTENTION-deficit hyperactivity disorder, ARTIFICIAL intelligence, MENTAL illness, TELECOMMUNICATION, HYPERACTIVITY, MACHINE learning - Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a common mental health disorder among teenagers that involves a combination of problems. ADHD is a neurodevelopmental condition that manifests itself in children and adolescents as inattention, hyperactivity, and impulsivity. This research proposes a novel classification approach using the BoostAlexNet model for automatic ADHD diagnosis. It combines several pretrained methods: ResNet 101, NASNet, Xception, MobileNet, and InceptionV3. Based on the pretrained models, input MRI images are processed and integrated to detect abnormalities in the MRI brain images of ADHD patients. The BoostAlexNet model is evaluated and compared with existing techniques. The dataset consists of 1359 CT images of ADHD and non‐ADHD cases. The validation set is set at 50 for each case, 150 in total, and the network is trained with 1069 MRI images for classification. The results show that BoostAlexNet achieves accuracy, sensitivity, and specificity values of 93.67%, 0.93, and 0.97, respectively, an improvement over existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. Empirical study on multiclass classification‐based network intrusion detection
- Author
-
Wisam Elmasry, Abdul Halim Zaim, Akhan Akbulut, and Bölüm Yok
- Subjects
Particle swarm optimization, Computer science, Cyber security, Deep learning, Machine learning, Multiclass classification, Computational Mathematics, Empirical research, Artificial Intelligence, Network intrusion detection - Abstract
Early and effective network intrusion detection is deemed a critical basis for the cybersecurity domain. In the past decade, although a significant amount of work has focused on network intrusion detection, it is still a challenge to establish an intrusion detection system with a high detection rate and a relatively low false alarm rate. In this paper, we have performed a comprehensive empirical study on network intrusion detection as a multiclass classification task: not just detecting a suspicious connection but also assigning its correct type. To surpass previous studies, we have utilized four deep learning models, namely, deep neural networks, long short-term memory recurrent neural networks, gated recurrent unit recurrent neural networks, and deep belief networks. Our approach relies on pretraining the models by exploiting a particle swarm optimization–based algorithm for hyperparameter selection. In order to investigate performance differences, we also included two well-known shallow learning methods, namely, decision forest and decision jungle. Furthermore, we used four datasets dedicated to intrusion detection systems to explore various environments: KDD CUP 99, NSL-KDD, CIDDS, and CICIDS2017. Moreover, 22 evaluation metrics are used to assess model performance on each dataset. Finally, intensive quantitative analyses, Friedman tests, and ranking-method analyses of our results are provided at the end of this paper. The results show a significant improvement in the detection of network attacks with our recommended approach. © 2019 Wiley Periodicals, Inc.
- Published
- 2019
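The particle swarm optimization step the abstract mentions for hyperparameter selection can be illustrated with a compact, generic PSO loop. The objective below is a stand-in for validation loss; in the paper each evaluation would train a deep model on intrusion data. The hyperparameter names, bounds, and PSO coefficients are illustrative assumptions.

```python
# Hedged sketch of PSO-based hyperparameter search: each particle is a
# candidate hyperparameter vector; velocities pull particles toward
# their personal best and the swarm's global best.
import random

def pso(objective, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep each hyperparameter inside its bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: pretend the best "learning rate" is 0.01 and the
# best "dropout" is 0.5. Real use would train and validate a model here.
obj = lambda h: (h[0] - 0.01) ** 2 + (h[1] - 0.5) ** 2
best, loss = pso(obj, bounds=[(1e-4, 0.1), (0.0, 0.9)])
```

Because PSO only needs objective evaluations, not gradients, it suits hyperparameters like layer counts or dropout rates where the validation loss is not differentiable in the hyperparameters.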
50. Research on group behavior model based on neural network computing.
- Author
-
Wei, Jinfeng, Tian, Yuan, and Geng, Jingui
- Subjects
ARTIFICIAL neural networks, BEHAVIORAL research, SUPERVISED learning, ARTIFICIAL intelligence, RESEARCH teams - Abstract
The term "compute" derives from the French computer and the Latin computare, as does "computing." The field of computing has grown enormously over the years, from the simple, traditional Turing machine conceived in 1936 by Alan Turing to current neural network (NN) computing. NNs, a field of artificial intelligence (AI), were inspired by the structure and inner workings of the brain. Just as the brain is an interconnection of neurons, a NN is an interconnection of basic units known as perceptrons. The two do not differ much in structure; their only difference is that one is artificial while the other is entirely biological. The hierarchical intricacies of NNs can be represented in three layers: the perceptron, the artificial NN (ANN), and the deep NN (DNN). With the rise of mental and behavioral disorders, the need for basic surveillance, and the urgency of improving people's mental health, studying the behavioral dynamics of people is requisite. CCTV and street cameras can only do so much, hence the need to employ NNs, which use supervised learning to train models that perfect and automate surveillance. The results of this retrospective research indicate that the NN model surpasses traditional methods in efficiency and reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF