192 results
Search Results
2. A Color Image Encryption Algorithm with Cat Map and Chaos Map Embedded.
- Author
- Li, Guodong and Han, Xuejuan
- Subjects
- IMAGE encryption, ALGORITHMS, DIFFUSION processes, INFORMATION technology security, CATS
- Abstract
To address the problem that existing encryption algorithms are overly simple, and that the relatively low security of color images leaves them open to attack during transmission, this paper introduces a new encryption algorithm that divides a color image into its R, G and B layers. In the scrambling operation, the first scrambling is block-based scrambling of the plaintext image; the second is dynamic Arnold scrambling of the ciphertext produced by the first. In the diffusion operation, the scrambled ciphertext image is taken as input and a pseudo-random sequence generated by Tent and Sine mapping is embedded. A sequence generated by Logistic mapping selects sub-blocks for block diffusion of the image, and Tent-Sine mapping is applied in a second diffusion to obtain the final ciphertext image. Because the algorithm combines image block scrambling with dynamic Arnold scrambling, the scrambling degree of each image layer's pixels is greatly improved, improving the security of color images. In the diffusion process, a chaotic sequence is selected for the diffusion operation, which increases the difficulty of decoding the ciphertext. Simulation results show that the new algorithm has a desirable encryption effect, strong key sensitivity and a large key space, and can effectively resist attacks, so it has clear value in image information security. [ABSTRACT FROM AUTHOR]
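The dynamic Arnold scrambling step builds on the classical Arnold cat map, which permutes the pixel coordinates of a square layer with the matrix [[1, 1], [1, 2]] modulo N. A minimal sketch of the classical map follows; the paper's dynamic variant and the chaotic diffusion stages are not reproduced here.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    # Classical Arnold cat map on a square N x N layer:
    # (x, y) -> ((x + y) mod N, (x + 2y) mod N)
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square layer"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = scrambled
    return out
```

Because the map matrix has determinant 1, the transform is a bijection modulo N and therefore reversible; on a 4 x 4 layer, three applications return the original layer.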
- Published
- 2021
- Full Text
- View/download PDF
3. Solution of Uncertain Constrained Multi-Objective Travelling Salesman Problem with Aspiration Level Based Multi Objective Quasi Oppositional Jaya Algorithm.
- Author
- Bajaj, Aaishwarya and Dhodiya, Jayesh
- Subjects
- *LEVEL of aspiration, *ALGORITHMS, *TRAVELING salesman problem, *MEMBERSHIP functions (Fuzzy logic)
- Abstract
The Multi-Objective Travelling Salesman Problem (MOTSP) is one of the most crucial problems in realistic scenarios, and it is difficult to solve by classical methods; it can, however, be solved by evolutionary methods. This paper investigates the Constrained Multi-Objective Travelling Salesman Problem (CMOTSP) and the Constrained Multi-Objective Solid Travelling Salesman Problem (CMOSTSP) in an uncertain environment with zigzag uncertain variables. To solve the CMOTSP and CMOSTSP models under uncertainty, expected value and optimistic value models are developed using two different ranking criteria of uncertainty theory, and the models are transformed to their deterministic forms using the fundamentals of uncertainty. The models are solved using two solution methodologies: the Aspiration Level-based Multi-Objective Quasi Oppositional Jaya Algorithm (AL-based MOQO Jaya) and the Fuzzy Programming Technique (FPT) with a linear membership function. A numerical illustration is solved with both methodologies to demonstrate the application. The sensitivity of the OVM model's objective functions to the confidence levels is also investigated. The paper concludes that the developed approach solves CMOTSP and CMOSTSP efficiently with effective output and provides alternative solutions for decision-making to the decision maker (DM). [ABSTRACT FROM AUTHOR]
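The core Jaya update on which AL-based MOQO Jaya builds is parameter-free: each candidate moves toward the current best solution and away from the worst. A minimal single-objective sketch under illustrative settings; the multi-objective, quasi-oppositional and aspiration-level machinery of the paper is not reproduced.

```python
import numpy as np

def jaya(f, bounds, pop=20, iters=200, seed=0):
    # Jaya update: move toward the best candidate and away from the
    # worst; a candidate is kept only if its objective value improves.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    fx = np.array([f(x) for x in X])
    for _ in range(iters):
        best, worst = X[fx.argmin()], X[fx.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lo, hi)
        fn = np.array([f(x) for x in Xn])
        better = fn < fx
        X[better], fx[better] = Xn[better], fn[better]
    return X[fx.argmin()], fx.min()

# Minimize the 2-D sphere function as a toy stand-in for a route cost.
x_best, f_best = jaya(lambda v: float(np.sum(v ** 2)), ([-5, -5], [5, 5]))
```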
- Published
- 2024
- Full Text
- View/download PDF
4. Hybrid MRK-Means++ RBM Model: An Efficient Heart Disease Predicting System Using Modified Rough K-Means++ Algorithm and Restricted Boltzmann Machine.
- Author
- Prasanna, Kamepalli S. L. and Challa, Nagendra Panini
- Subjects
- BOLTZMANN machine, HEART diseases, MACHINE learning, ALGORITHMS, DEEP learning, MEDICAL personnel
- Abstract
The clinical diagnosis of heart disease is, in most situations, based on a difficult amalgamation of pathological and clinical information. Because of this complexity, there is significant interest among diagnostic healthcare professionals and researchers in the efficient, accurate, and early-stage forecasting of heart disease, and deep learning algorithms aid in its prediction. The main focus of this paper is a method for predicting heart disease through Modified Rough K-means++ (MRK++) clustering combined with a Restricted Boltzmann Machine (RBM). The work has two modules: (1) a clustering component based on Modified Rough K-means++; and (2) disease prediction based on the RBM. In the clustering module, the input Cleveland dataset is clustered using the stochastic probabilistic rough K-means++ technique. The clustered data are then fed to the RBM, and this hybrid structure forms the heart disease forecasting module. During testing, the most valid result is chosen from the clustered test data, and the RBM classifier corresponding to the nearest cluster is selected based on the smallest distance or similar parameters; the output value is then used to predict heart disease. Three experiments are performed: the first modifies the rough K-means++ clustering algorithm, the second evaluates the classification result, and the third assesses the hybrid model. Compared with any single model, the hybrid Modified Rough K-means++-RBM model provides the highest accuracy. [ABSTRACT FROM AUTHOR]
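The RBM half of the hybrid can be illustrated with a minimal Bernoulli RBM trained by one-step contrastive divergence (CD-1). The hyperparameters and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden=8, lr=0.2, epochs=500, seed=0):
    # Bernoulli RBM with CD-1: positive phase on the data, negative
    # phase on a one-step stochastic reconstruction.
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)                  # visible biases
    b_h = np.zeros(n_hidden)                   # hidden biases
    for _ in range(epochs):
        p_h0 = sigmoid(data @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)         # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)
        W += lr * (data.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (data - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

def reconstruct(v, W, b_v, b_h):
    return sigmoid(sigmoid(v @ W + b_h) @ W.T + b_v)
```

In the paper's pipeline, the RBM receives clustered data rather than raw records; the sketch only shows the generative training step.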
- Published
- 2023
- Full Text
- View/download PDF
5. Improved Meta-Heuristic Model for Text Document Clustering by Adaptive Weighted Similarity.
- Author
- Venkanna, Gugulothu and Bharati, K. F.
- Subjects
- *DOCUMENT clustering, *METAHEURISTIC algorithms, *CENTROID, *ALGORITHMS
- Abstract
This paper develops a novel framework for text document clustering with the aid of a new improved meta-heuristic algorithm. Initially, features are selected from the text documents by subjecting each word to Term Frequency-Inverse Document Frequency (TF-IDF) computation. Subsequently, centroid selection, which plays a vital role in cluster formation, is done using a new Improved Lion Algorithm (LA) termed the Crossover probability-based LA model (CP-LA). As a novelty, this paper introduces a new inter- and intra-cluster similarity model, and the centroid selection is made such that the proposed adaptive weighted similarity is minimal. Based on the characteristics of the documents, the weights are automatically adapted within the similarity measure. The proposed adaptive weighted similarity function involves the inter-cluster and intra-cluster similarity of both ordered and unordered documents. Finally, the superiority of the proposed model over other models is demonstrated under different performance measures. [ABSTRACT FROM AUTHOR]
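The TF-IDF weighting used in the feature-selection step can be sketched directly from its definition. This is one common variant; normalization and smoothing choices differ across implementations.

```python
import math
from collections import Counter

def tfidf(docs):
    # TF-IDF for a list of tokenized documents: term frequency within a
    # document times log inverse document frequency across the corpus.
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

docs = [["fuzzy", "cluster", "fuzzy"], ["cluster", "graph"], ["graph", "mining"]]
w = tfidf(docs)
```

A term that appears often in one document but rarely across the corpus ("fuzzy" above) gets a higher weight than a term spread across documents ("cluster").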
- Published
- 2023
- Full Text
- View/download PDF
6. Performance Limit: Fuzzy Logic Based Anti-Collision Algorithm for Industrial Internet of Things Applications.
- Author
- Zhong, Dongbo, Cui, Zhiyong, and Xie, Yufei
- Subjects
- *INTERNET of things, *FUZZY logic, *SHORTWAVE radio, *ALGORITHMS, *ASSET management, *RADIO frequency identification systems
- Abstract
Passive RFID has the advantages of rapid identification of multiple target objects and low implementation cost. It is the most critical technology in the information-gathering layer of the Industrial Internet of Things and is extensively applied in industries such as smart production, asset management, and monitoring. Signal collisions caused by the reader/writer and tags sharing the same wireless channel lead to a series of problems, such as reduced reader identification efficiency and an increased missed-read rate, restricting the further development of RFID. At present, many hybrid anti-collision algorithms integrate the advantages of the Aloha and TS algorithms to optimize RFID system performance, but these solutions also suffer from performance bottlenecks. To break through this bottleneck, we combined the sub-frame observation mechanism and the Q-value adjustment strategy on top of the ISE-BS algorithm and propose two hybrid anti-collision algorithms. The experimental results show that the two proposed algorithms have clear advantages in system throughput, time efficiency and other metrics, surpassing existing UHF RFID anti-collision algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Bio-Inspired Algorithm Based Undersampling Approach and Ensemble Learning for Twitter Spam Detection.
- Author
- Kiruthika Devi, K. and Sathish Kumar, G. A.
- Subjects
- *PARTICLE swarm optimization, *SOCIAL media, *PLURALITY voting, *ALGORITHMS, *RANDOM forest algorithms, *INTERNATIONAL communication
- Abstract
Currently, social media networks such as Facebook and Twitter have evolved into valuable platforms for global communication. However, due to its extensive user base, Twitter is often misused by illegitimate users engaging in illicit activities. While numerous research papers delve into combating illegitimate users on Twitter, a common shortcoming in most of these works is the failure to address class imbalance, which significantly impacts the effectiveness of spam detection, and the few works that have addressed class imbalance have not applied bio-inspired algorithms to balance the dataset. Therefore, we introduce PSOB-U, a particle swarm optimization-based undersampling technique designed to balance the Twitter dataset. In PSOB-U, various classifiers and metrics are employed to select and rank majority samples. Furthermore, an ensemble learning approach combines the base classifiers in three stages. During the training of the base classifiers, undersampling techniques and a cost-sensitive random forest (CS-RF) address the imbalanced data at both the data and algorithmic levels. In the first stage, imbalanced datasets are balanced using random undersampling, particle swarm optimization-based undersampling, and random oversampling. In the second stage, a classifier is constructed for each of the balanced datasets obtained through these sampling techniques. In the third stage, a majority voting method aggregates the predicted outputs from the three classifiers. The evaluation results demonstrate that our proposed method significantly enhances the detection of illegitimate users in the imbalanced Twitter dataset. We also compare our proposed work with existing models; the results highlight the superiority of our spam detection model over state-of-the-art spam detection models that address the class imbalance problem. The combination of particle swarm optimization-based undersampling and ensemble learning with majority voting results in more accurate spam detection. [ABSTRACT FROM AUTHOR]
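Two of the building blocks, random undersampling and majority voting, are straightforward to sketch; PSOB-U itself and the cost-sensitive random forest are not reproduced here.

```python
import random
from collections import Counter

def random_undersample(X, y, seed=0):
    # Balance a dataset by sampling every class down to the size of the
    # smallest class (the random-undersampling baseline, not PSOB-U).
    rnd = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        for xi in rnd.sample(rows, n_min):
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

def majority_vote(predictions):
    # Aggregate per-sample labels predicted by several base classifiers.
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# Three base classifiers' predictions for three samples.
votes = majority_vote([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
```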
- Published
- 2024
- Full Text
- View/download PDF
8. Vectorized Kernel-Based Fuzzy C-Means: a Method to Apply KFCM on Crisp and Non-Crisp Numbers.
- Author
- Hossein-Abad, Hadi Mahdipour, Shabanian, Mohsen, and Kazerouni, Iman Abaspur
- Subjects
- REMOTE-sensing images, ALGORITHMS, GENE expression, SOIL mapping, GENE clusters, THEMATIC mapper satellite, FUZZY numbers, IMAGE segmentation
- Abstract
Kernel methods are a class of pattern-analysis algorithms that are robust to noise, overlap, outliers and unequal-sized clusters. In this paper, the kernel-based fuzzy c-means (KFCM) method is extended so that KFCM can be applied to any crisp or non-crisp input numbers within a single structure. The proposed vectorized KFCM (VKFM) algorithm maps the (crisp or non-crisp) input features to crisp ones and applies KFCM (with prototypes in feature space) to them. Finally, the resulting crisp prototypes in the mapped space are passed through an inverse mapping to obtain the prototypes' (centers') parameters in the input feature space. The performance of the proposed method is compared with conventional FCM, KFCM and other recent methods to show its effectiveness in clustering gene expression data and in land-cover segmentation of satellite images. Simulation results show the good accuracy of the proposed method compared with the other methods. [ABSTRACT FROM AUTHOR]
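The KFCM baseline that the paper extends can be sketched with a Gaussian kernel and prototypes kept in the input space (a common KFCM formulation); the vectorized non-crisp mapping of VKFM is not reproduced, and the deterministic initialization is an assumption for the sketch.

```python
import numpy as np

def kfcm(X, c=2, m=2.0, sigma=1.0, iters=100):
    # Kernel fuzzy c-means with a Gaussian kernel; the kernel-induced
    # distance 1 - K(x, v) replaces the Euclidean distance of FCM.
    V = X[np.linspace(0, len(X) - 1, c).astype(int)].copy()  # spread-out init
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma ** 2))        # K(x_k, v_i)
        dist = np.clip(1.0 - K, 1e-12, None)      # kernel-induced distance
        U = dist ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)         # fuzzy memberships
        w = (U ** m) * K
        V = (w.T @ X) / w.sum(axis=0)[:, None]    # prototype update
    return U, V
```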
- Published
- 2020
- Full Text
- View/download PDF
9. Branch-and-Price Based Heuristic Algorithm for Fuzzy Multi-Depot Bus Scheduling Problem.
- Author
- Saffarian, Mohsen, Niksirat, Malihe, Ghatee, Mehdi, and Nasseri, Seyed Hadi
- Subjects
- *FUZZY algorithms, *HEURISTIC algorithms, *SCHEDULING, *BUSES, *CONTAINER terminals, *BUS transportation, *PROBLEM solving, *ALGORITHMS
- Abstract
This paper deals with the fuzzy multi-depot bus scheduling (FMDBS) problem, in which the objective function and constraints are defined with fuzzy attributes. A credibility relation is used to formulate the problem as an integer multicommodity flow problem, and a novel combination of branch-and-price and heuristic algorithms is proposed to solve the FMDBS problem efficiently. In the proposed algorithm, a heuristic generates the initial columns for the column generation method, and a heuristic is also used to improve the solutions generated in each node of the branch-and-price tree. Two sets of benchmark examples demonstrate the efficiency of the proposed algorithm on large-scale instances, and the algorithm is also applied to the classical multi-depot bus scheduling problem. The results show that the proposed algorithm decreases the integrality gap and computational time in comparison with state-of-the-art algorithms and the plain branch-and-price algorithm. Finally, as a case study, bus schedules for the Tehran BRT network are generated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. An Improved Fuzzy Adaptive Firefly Algorithm-Based Hybrid Clustering Algorithms.
- Author
- Agrawal, Anmol, Tripathy, B. K., and Thirunavukarasu, Ramkumar
- Subjects
- FIREFLIES, FUZZY algorithms, ALGORITHMS, IMAGE segmentation, FUZZY sets, CENTROID, ROUGH sets
- Abstract
Firefly and fuzzy firefly optimization algorithms are widely used in clustering techniques and in applications such as image segmentation. Parameters such as the step factor and attractiveness have been kept constant in these algorithms, which affects the convergence rate and accuracy of the clustering process. Although the fuzzy adaptive firefly algorithm tackled this problem by making those parameters adaptive, issues such as a low convergence rate and non-optimal solutions remain. To tackle these issues, this paper proposes a novel fuzzy adaptive fuzzy firefly algorithm that significantly improves accuracy and convergence rate compared with existing optimization algorithms. Further, fusing the proposed algorithm with existing hybrid clustering algorithms involving fuzzy sets, intuitionistic fuzzy sets, and rough sets yields eight novel hybrid clustering algorithms with better performance in optimizing the selection of initial centroids. To validate the proposal, experimental studies were conducted on datasets from benchmark data repositories such as UCI and Kaggle, with performance and accuracy evaluated using seven accuracy measures. The results clearly indicate the improved accuracy and convergence rate of the proposed algorithms. [ABSTRACT FROM AUTHOR]
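The constant parameters the paper criticizes appear in the canonical firefly update, where attractiveness decays with squared distance and the step factor alpha scales the random move. A minimal single-objective sketch; the fuzzy-adaptive parameter control is not reproduced, and the simple alpha decay below only stands in for it.

```python
import numpy as np

def firefly(f, bounds, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    # Canonical firefly algorithm (minimization): each firefly moves toward
    # every brighter one with attractiveness beta0 * exp(-gamma * r^2),
    # plus a random step scaled by the step factor alpha.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, (n, lo.size))
    fx = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fx[j] < fx[i]:                     # j is brighter
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    step = alpha * (rng.random(lo.size) - 0.5)
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i]) + step, lo, hi)
                    fx[i] = f(X[i])
        alpha *= 0.97   # fixed decay standing in for adaptive control
    k = fx.argmin()
    return X[k], fx[k]

x_best, f_best = firefly(lambda v: float(np.sum(v ** 2)), ([-5, -5], [5, 5]))
```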
- Published
- 2021
- Full Text
- View/download PDF
11. Content and Location Based Point-of-Interest Recommendation System Using HITS Algorithm.
- Author
- Vinodha, R. and Parvathi, R.
- Subjects
- *RECOMMENDER systems, *DIGITAL technology, *ALGORITHMS, *SOFTWARE engineers, *DATA mining, *CELL phones, *USER-generated content, *SOFTWARE engineering
- Abstract
The study of geographic information has become a significant field in software engineering because of the expansion of information generated by electronic devices that capture geographic data from people, such as smartphones and GPS devices. Location data allow a deeper understanding of users' preferences and actions by bridging the gap between the physical and digital worlds. This growth of large geo-spatial datasets has inspired research into novel recommender systems that aim to make users' travel and social interactions easier, and has prompted an increase in research activities and procedures aimed at analyzing these large datasets and recovering useful information from them. The aim of this work is to study GPS trajectories from a variety of people; to examine and apply computational strategies that recover useful information from those trajectories, including areas of interest and people's proximity; and then to create a tool for information representation. This paper demonstrates how data mining techniques are used to recover valuable information from spatial data, and how such data can be useful in understanding people and areas within a district. Based on the outcomes, we propose a HITS (Hypertext Induced Topic Search) based POI recommendation algorithm that can take into account the effect of social connections when recommending POIs to individual users. [ABSTRACT FROM AUTHOR]
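The HITS computation itself is a short fixed-point iteration over hub and authority scores. In the POI setting the graph would link users and locations; the sketch below uses a plain directed graph for clarity.

```python
import numpy as np

def hits(adj, iters=50):
    # Hypertext Induced Topic Search: mutually reinforcing hub and
    # authority scores on a directed graph given as an adjacency matrix.
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs            # authority: pointed to by good hubs
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths              # hub: points to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Nodes 0 and 1 both link to node 2; node 2 links to nothing.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 0, 0]], float)
hubs, auths = hits(adj)
```

Node 2 ends up the strongest authority, while nodes 0 and 1 share equal hub scores.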
- Published
- 2023
- Full Text
- View/download PDF
12. LEARNING DECISION REGIONS BASED ON ADAPTIVE ELLIPSOIDS.
- Author
- YAO, LEEHTER and WENG, KUEI-SUNG
- Subjects
- ELLIPSOIDS, GENETIC algorithms, CLUSTER analysis (Statistics), DECISION making, ALGORITHMS, DISTRIBUTION (Probability theory)
- Abstract
A fuzzy classifier using multiple ellipsoids to approximate decision regions for classification is designed in this paper. To learn the sizes and orientations of ellipsoids, an algorithm called evolutionary ellipsoidal classification algorithm (EECA) that integrates the genetic algorithm (GA) with the Gustafson-Kessel algorithm (GKA) is proposed. Within EECA the GA is employed to learn the size of every ellipsoid. With the size of every ellipsoid encoded and intelligently estimated in the GA chromosome, GKA is utilized to learn the corresponding ellipsoid. GKA is able to adapt the distance norm to the underlying distribution of the prototype data points for an assigned ellipsoid size. A process called directed initialization is proposed to improve EECA's learning efficiency. Because EECA learns the data point distribution in every cluster by adjusting an ellipsoid with suitable size and orientation, the information contained in the ellipsoid is further utilized to improve the cluster validity. A cluster validity measure based on the ratio of summation for each intra-cluster scatter with respect to the inter-cluster separation is defined in this paper. The proposed cluster validity measure takes advantage of EECA's learning capability and serves as an effective index for determining the adequate number of ellipsoids required for classification. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
13. FUZZY EXTREME LEARNING MACHINE FOR A CLASS OF FUZZY INFERENCE SYSTEMS.
- Author
- HAI-JUN RONG, GUANG-BIN HUANG, and YONG-QI LIANG
- Subjects
- FUZZY systems, MACHINE learning, ALGORITHMS, BATCH processing, MATHEMATICAL proofs, REGRESSION analysis
- Abstract
Recently, an Online Sequential Fuzzy Extreme Learning Machine (OS-Fuzzy-ELM) algorithm was developed by Rong et al. for RBF-like fuzzy neural systems, where a fuzzy inference system is equivalent to an RBF network under some conditions. In this paper, the learning ability of the batch version of OS-Fuzzy-ELM, called Fuzzy-ELM, is further evaluated for training a class of fuzzy inference systems which cannot be represented by RBF networks. The equivalence between the output of the fuzzy system and that of a generalized single-hidden-layer feedforward network, as presented in Huang et al., is shown first and then used to prove the validity of the Fuzzy-ELM algorithm. In Fuzzy-ELM, the parameters of the fuzzy membership functions are randomly assigned and the corresponding consequent parameters are determined analytically. In addition, an input variable selection method based on a correlation measure is proposed to select the relevant inputs of the fuzzy system; this avoids the exponential increase in the number of fuzzy rules with the dimension of the input variables while maintaining testing performance and reducing the computational burden. A performance comparison of Fuzzy-ELM with other existing algorithms is presented on real-world regression benchmark problems. The results show that the proposed Fuzzy-ELM produces similar or better accuracy with a significantly lower training time. [ABSTRACT FROM AUTHOR]
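The defining trait of ELM carries over to Fuzzy-ELM: the hidden layer is fixed at random, and only the output weights are solved analytically. A generic ELM regression sketch, with random tanh units standing in for the randomly assigned membership functions of the paper:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    # Extreme Learning Machine: random fixed hidden layer, output
    # weights solved in one shot via the Moore-Penrose pseudoinverse.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)             # random hidden activations
    beta = np.linalg.pinv(H) @ y       # analytic solve, no iteration
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a 1-D toy regression target.
X = np.linspace(-1, 1, 60)[:, None]
y = np.sin(3 * X[:, 0])
W, b, beta = elm_train(X, y)
err = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

The absence of iterative hidden-layer training is what yields the low training times the abstract reports.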
- Published
- 2013
- Full Text
- View/download PDF
14. MPE Computation in Bayesian Networks Using Mini-Bucket and Probability Trees Approximation.
- Author
- Cano, Andrés, Gómez-Olmedo, Manuel, Moral, Serafín, and Moral-García, Serafín
- Subjects
- ALGORITHMS, CONDITIONAL probability, PROBABILITY theory, BAYESIAN analysis
- Abstract
Given a set of uncertain discrete variables with a joint probability distribution and a set of observations for some of them, the most probable explanation is a configuration of values for the non-observed variables maximizing the conditional probability of these variables given the observations. This is a hard problem which can be solved by a deletion algorithm with max marginalization, with a complexity similar to that of computing conditional probabilities. When this approach is unfeasible, an alternative is an approximate deletion algorithm, which can guide the search for the most probable explanation using A* or branch and bound (the approximate+search approach). The most common approximation procedure has been the mini-bucket approach. This paper shows that using probability trees as the representation of potentials, with pruning of branches having similar values, can improve the performance of this procedure. This is corroborated by an experimental study comparing computation times on randomly generated and benchmark Bayesian networks from UAI competitions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. HANDLING HIGHLY-DIMENSIONAL CLASSIFICATION TASKS WITH HIERARCHICAL GENETIC FUZZY RULE-BASED CLASSIFIERS.
- Author
- STAVRAKOUDIS, DIMITRIS G. and THEOCHARIS, JOHN B.
- Subjects
- GENETIC algorithms, FUZZY systems, PARTITIONS (Mathematics), PERFORMANCE evaluation, HIERARCHICAL Bayes model, ALGORITHMS, MATHEMATICAL analysis
- Abstract
Many modern classification tasks are defined in highly-dimensional feature spaces. The derivation of high-performing genetic fuzzy rule-based classification systems (GFRBCSs) in such scenarios is a non-trivial task. This paper presents a framework for increasing the performance of GFRBCSs by creating a hierarchical fuzzy rule-based classifier. The proposed system is constructed through repeated invocations to a base GFRBCS procedure, considering at each step an input space fuzzy partition of a certain granularity. The best performing rules are inserted in the hierarchical rule base and the process is repeated again, considering a thicker granularity. The employed boosting scheme guides the algorithm in creating new rules to treat uncovered or misclassified patterns, thus monotonically increasing the performance of the classifier. Extensive experimental analysis in a number of real-world high-dimensional classification tasks proves the effectiveness of the proposed approach in increasing the performance of the base classifier, maintaining its interpretability to a considerable degree. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
16. A NEW APPROACH FOR SOLVING THE MINIMUM COST FLOW PROBLEM WITH INTERVAL AND FUZZY DATA.
- Author
- GHIYASVAND, MEHDI
- Subjects
- FUZZY numbers, SET theory, COST control, DATA analysis, MATHEMATICAL analysis, ALGORITHMS
- Abstract
Imprecise observations or possible perturbations mean that data in network flows may be better represented by intervals or fuzzy numbers than by crisp quantities. In this paper we first consider the minimum cost flow problem with compact interval-valued lower and upper bounds, flows, and costs, and present a new method showing that this problem can be solved using two minimum cost flow problems with crisp data. This result is then extended to networks with fuzzy lower and upper bounds, flows, and costs. One of the best algorithms for the minimum cost flow problem with crisp data is the cost scaling algorithm of Goldberg and Tarjan; in this paper, the cost scaling algorithm is modified for fuzzy lower and upper bounds, flows and costs. The running time of the modified algorithm equals that of the cost scaling algorithm with crisp data. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
17. DYNAMIC SIMILARITY METRIC USING FUZZY PREDICATES FOR CASE-BASED PLANNING.
- Author
- OWAIS, M. A. and AHMED, M. A.
- Subjects
- CASE-based reasoning, FUZZY logic, ALGORITHMS, PROBLEM solving, FUZZY systems
- Abstract
Case-based planning (CBP) is a knowledge-based planning technique that develops new plans by reusing past experience instead of planning from scratch. The task of CBP becomes difficult when the knowledge needed for planning cannot be expressed precisely. In this paper, we tackle this issue by modeling imprecise information using fuzzy predicates, and accordingly present a dynamic similarity metric for efficient and effective retrieval of relevant cases from a case library. We also present a weight adaptation algorithm that improves the performance of the metric over time. We use and compare Tabu search, simulated annealing, and exhaustive search for instantiating fuzzy predicates to achieve maximum similarity between a new problem and a case. Our experiments show that the proposed metric is sound, and the metric together with the adaptation algorithm is promising compared with others. Experiments also show that simulated annealing is more efficient than Tabu search and exhaustive search in fuzzy predicate instantiation. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
18. FUZZY CLASS LOGISTIC REGRESSION ANALYSIS.
- Author
- Mii-Shen Yang and Hwei-Ming Chen
- Subjects
- FUZZY sets, REGRESSION analysis, ALGORITHMS, PARAMETER estimation, ANALYSIS of variance, MATHEMATICAL statistics, MULTIVARIATE analysis
- Abstract
Distribution mixtures are used as models to analyze grouped data, and parameter estimation is an important step for mixture distributions. The latent class model is generally used for analyzing mixture distributions for discrete data. In this paper, we consider parameter estimation for a mixture of logistic regression models, for which the expectation maximization (EM) algorithm has been the most widely used method. We propose a new type of fuzzy class model and then derive an algorithm, called fuzzy classification maximum likelihood (FCML), for the parameter estimation of a fuzzy class logistic regression model. The effects of the explanatory variables on the response variables are described, with the focus on binary responses for logistic regression mixture analysis with a fuzzy class model. Using a mean squared error (MSE) based accuracy criterion, the FCML and EM algorithms are compared for the parameter estimation of logistic regression mixture models on samples drawn from logistic regression mixtures of two classes. Numerical results show that the proposed FCML algorithm gives good accuracy and is recommended as a new tool for the parameter estimation of logistic regression mixture models. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
19. Enhanced CRNN-Based Optimal Web Page Classification and Improved Tunicate Swarm Algorithm-Based Re-Ranking.
- Author
- Yasin, Syed Ahmed and Prasada Rao, P. V. R. D.
- Subjects
- *WEBSITES, *FEATURE selection, *FEATURE extraction, *CLASSIFICATION, *INTERNET searching, *ALGORITHMS
- Abstract
The main intention of this paper is to develop a new intelligent framework for web page classification and re-ranking. The two main phases of the proposed model are (a) classification and (b) re-ranking-based retrieval. In the classification phase, pre-processing is performed first, following steps such as HTML (Hyper Text Markup Language) tag removal, punctuation removal, stop-word removal, and stemming. After pre-processing, word-to-vector formation is done and feature extraction is performed by Principal Component Analysis (PCA). From this, optimal feature selection is accomplished, which is an important process for accurate classification of web pages, since web pages contain many features that can reduce classification accuracy. Here, a new meta-heuristic algorithm termed the Opposition-based Tunicate Swarm Algorithm (O-TSA) is adopted to perform optimal feature selection. Finally, the selected features are passed to the Enhanced Convolutional-Recurrent Neural Network (E-CRNN), enhanced by O-TSA, for accurate web page classification. The outcome of this phase is the categorization of web pages into different classes. In the second phase, re-ranking is performed using the O-TSA, which derives an objective function based on a similarity function (correlation) for URL matching, resulting in optimal re-ranking of web pages for retrieval. Thus, the proposed method yields better classification and re-ranking performance and reduces space requirements and search time over web documents compared with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. MIGR: A Categorical Data Clustering Algorithm Based on Information Gain in Rough Set Theory.
- Author
- Raheem, Saddam, Al Shehabi, Shadi, and Mohi Nassief, Amaal
- Subjects
- *ROUGH sets, *ENTROPY (Information theory), *ALGORITHMS
- Abstract
Clustering techniques split data into clusters such that elements in a cluster are more similar to one another than to elements in other clusters. Some of these techniques can handle uncertainty in the clustering process, while others may have stability issues. In this paper, a novel method called Minimum Information Gain Roughness (MIGR) is proposed to select the clustering attribute based on information entropy with rough set theory. To evaluate its performance, three benchmark UCI datasets are clustered using MIGR, and the resulting clusters are compared with those obtained by the Min-Min-Rough (MMR) and information-theoretic dependency roughness (ITDR) algorithms; both of the latter have previously been compared with a variety of clustering algorithms such as k-modes, fuzzy centroids, and fuzzy k-modes. Global purity, overall purity, and F-measure are used as performance measures to compare the quality of the resulting clusters. The experimental results show that the MIGR algorithm outperforms both the MMR and ITDR algorithms for clustering categorical data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
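As a rough illustration of the entropy side of MIGR (the rough-set roughness component is omitted here), each categorical attribute can be scored by its information entropy and a low-entropy attribute chosen as the clustering attribute; records and attribute values below are invented.

```python
import math
from collections import Counter

# Information entropy of a categorical value sequence, in bits.
def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# Toy records: (colour, size) per object.
records = [("red", "small"), ("red", "large"),
           ("blue", "medium"), ("red", "small")]
by_attr = list(zip(*records))            # one value tuple per attribute
scores = [entropy(col) for col in by_attr]
best = scores.index(min(scores))         # lowest-entropy attribute
print(best, [round(s, 3) for s in scores])  # → 0 [0.811, 1.5]
```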
21. Nonlinear System Identification Using Clustering Algorithm Based on Kernel Method and Particle Swarm Optimization.
- Author
-
Ahmed, Troudi, Mohamed, Bouzbida, and Abdelkader, Chaari
- Subjects
NONLINEAR systems ,CLUSTER analysis (Statistics) ,ALGORITHMS ,KERNEL (Mathematics) ,PARTICLE swarm optimization ,PARAMETERS (Statistics) - Abstract
Many clustering algorithms have been proposed in the literature to identify the parameters of the Takagi-Sugeno fuzzy model; examples include the Fuzzy C-Means (FCM) algorithm, the Possibilistic C-Means (PCM) algorithm, the Allied Fuzzy C-Means (AFCM) algorithm, the NEPCM algorithm, and the KNEPCM algorithm. The main drawbacks of these algorithms are their sensitivity to initialization and their convergence to a local optimum of the objective function. To overcome these problems, particle swarm optimization, a global optimization technique, is employed: combining the local search capacity of the KNEPCM algorithm with the global optimization ability of the PSO algorithm can solve both issues. In this paper, a new clustering algorithm called KNEPCM-PSO is proposed, a combination of the Kernel New Extended Possibilistic C-Means algorithm (KNEPCM) and Particle Swarm Optimization (PSO). The effectiveness of this algorithm is tested on nonlinear systems and on an electro-hydraulic system. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
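The global-search component that KNEPCM-PSO adds can be illustrated with a bare-bones particle swarm optimizer on a toy one-dimensional objective; the inertia and acceleration constants below are conventional textbook values, not the paper's settings.

```python
import random

# Minimal PSO: inertia 0.7, cognitive/social weights 1.5 (textbook values).
def pso(f, dim=1, n=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: (x[0] - 2.0) ** 2)
print(best)  # converges near x = 2
```

In the hybrid of the abstract, each particle would encode cluster prototypes and the objective would be the KNEPCM cost rather than this toy quadratic.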
22. An Approach for Interesting Subgraph Mining from Web Log Data Using W-Gaston Algorithm.
- Author
-
Jayalakshmi, N., Padmaja, P., and Jaya Suma, G.
- Subjects
BLOGS ,DATA logging ,INFORMATION retrieval ,DATA mining ,ALGORITHMS - Abstract
Graph-Based Data Mining (GBDM) is an emerging research topic for retrieving essential information from graph databases. Many algorithms exist for finding frequent patterns in a given graph database. One such algorithm, Gaston, uses frequency-based support to discover frequent patterns. However, the discovery phase of the Gaston algorithm is time-consuming, and pages that captured users' interest are ignored by it. This paper proposes the Weighted-Gaston (W-Gaston) algorithm, a modification of the existing Gaston algorithm, in which four interestingness measures based on frequency, entropy, and page duration are developed for retrieving interesting sub-graphs. The proposed measures are four types of support: (1) support based on page duration (W-Support); (2) support based on entropy (E-Support); (3) support based on page duration and entropy (WE-Support); and (4) support based on frequency, page duration, and entropy (FWE-Support). The simulation of the proposed work is done using the MSNBC and weblog databases. The experimental results show that the proposed algorithm performs well compared with existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
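The duration-based W-Support idea described above can be sketched as the total viewing time of sessions containing a pattern, normalised by overall viewing time; the session data below is invented, and sub-graph patterns are simplified to page sets.

```python
# Toy web-log sessions: (pages visited, total duration in seconds).
sessions = [
    ({"home", "news"}, 120),
    ({"home", "sports"}, 60),
    ({"news", "sports"}, 20),
]

# Duration-weighted support of a page-set pattern (sketch of W-Support).
def w_support(pattern, sessions):
    total = sum(dur for _, dur in sessions)
    hit = sum(dur for pages, dur in sessions if pattern <= pages)
    return hit / total

print(w_support({"home"}, sessions))  # → 0.9  (180 / 200)
```

The entropy-weighted variants (E-, WE-, FWE-Support) would additionally weight each session's contribution by an entropy term.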
23. Particle Grey Wolf Optimizer (PGWO) Algorithm and Semantic Word Processing for Automatic Text Clustering.
- Author
-
Vidyadhari, Ch., Sandhya, N., and Premchand, P.
- Subjects
PARTICLE swarm optimization ,DOCUMENT clustering ,ALGORITHMS - Abstract
Text mining refers to the process of extracting high-quality information from text. It is broadly used in applications such as text clustering, text categorization, and text classification. Recently, text clustering has become a useful yet challenging task for grouping text documents, but irrelevant terms and high dimensionality reduce clustering accuracy. In this paper, semantic word processing and a novel Particle Grey Wolf Optimizer (PGWO) are proposed for automatic text clustering. Initially, the text documents are given as input to the pre-processing step, which yields useful keywords for feature extraction and clustering. Each resulting keyword is then looked up in the WordNet ontology to find its synonyms and hyponyms. Subsequently, the frequency of every keyword is determined and used to build the text feature library. Since the text feature library has a large dimension, entropy is used to select the most significant features. Finally, the new Particle Grey Wolf Optimizer (PGWO) is developed by integrating particle swarm optimization (PSO) into the grey wolf optimizer (GWO). The proposed algorithm assigns class labels to generate the different clusters of text documents. Simulations comparing the proposed algorithm with existing algorithms show that it attains a clustering accuracy of 80.36% on the 20 Newsgroups dataset and 79.63% on Reuters, confirming better automatic text clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. EVOLVING EXTREME LEARNING MACHINE PARADIGM WITH ADAPTIVE OPERATOR SELECTION AND PARAMETER CONTROL.
- Author
-
KE LI, RAN WANG, SAM KWONG, and JINGJING CAO
- Subjects
MACHINE learning ,ADAPTIVE computing systems ,OPERATOR theory ,COMPUTER networks ,ALGORITHMS ,DIFFERENTIAL evolution - Abstract
Extreme Learning Machine (ELM) is an emergent technique for training Single-hidden Layer Feedforward Networks (SLFNs). It has attracted significant interest in recent years, but the randomly assigned network parameters might cause high learning risks. This fact motivates the idea of this paper: an evolving ELM paradigm for classification problems. In this paradigm, a Differential Evolution (DE) variant, which can select the appropriate operator for offspring generation online and adaptively adjust the corresponding control parameters, is proposed for optimizing the network. In addition, 5-fold cross validation is adopted in the fitness assignment procedure to improve generalization capability. Empirical studies on several real-world classification data sets demonstrate that the evolving ELM paradigm generally outperforms the original ELM as well as several recent classification algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
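The 5-fold cross validation adopted in the fitness assignment procedure amounts to index bookkeeping like the sketch below (stride-based fold assignment is an arbitrary illustrative choice; the ELM training and the DE loop are omitted).

```python
# Yield (train_indices, val_indices) pairs for k-fold cross validation;
# a candidate network's fitness would be its mean validation score.
def k_fold_indices(n, k=5):
    folds = [list(range(i, n, k)) for i in range(k)]  # stride assignment
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(k_fold_indices(10, 5))
print(splits[0])  # → ([1, 6, 2, 7, 3, 8, 4, 9], [0, 5])
```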
25. FUSION OF EXTREME LEARNING MACHINE WITH FUZZY INTEGRAL.
- Author
-
JUNHAI ZHAI, HONGYU XU, and YAN LI
- Subjects
DATA fusion (Statistics) ,MACHINE learning ,FUZZY integrals ,ARTIFICIAL neural networks ,SET theory ,ALGORITHMS - Abstract
Extreme learning machine (ELM) is an efficient and practical learning algorithm for training single hidden layer feed-forward neural networks (SLFNs). ELM provides good generalization performance at extremely fast learning speed. However, ELM suffers from instability and over-fitting, especially on relatively large datasets. Based on probabilistic SLFNs, this paper proposes a fusion of extreme learning machines (F-ELM) with the fuzzy integral. The proposed algorithm consists of three stages. First, the bootstrap technique is employed to generate several subsets of the original dataset. Second, probabilistic SLFNs are trained with the ELM algorithm on each subset. Finally, the trained probabilistic SLFNs are fused with the fuzzy integral. The experimental results show that the proposed approach alleviates the problems mentioned above to some extent and increases prediction accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
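Stage one of the proposed F-ELM, generating several training subsets by the bootstrap, reduces to sampling with replacement; a minimal sketch with a toy dataset:

```python
import random

# Draw n_subsets bootstrap samples (with replacement, same size as data),
# one per base learner to be trained in stage two.
def bootstrap_subsets(data, n_subsets, seed=0):
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in data] for _ in range(n_subsets)]

data = list(range(8))
subsets = bootstrap_subsets(data, n_subsets=3)
print(len(subsets), [len(s) for s in subsets])  # → 3 [8, 8, 8]
```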
26. Dynamic Network Interdiction Problem with Uncertain Data.
- Author
-
Soleimani-Alyar, Maryam and Ghaffari-Hadigheh, Alireza
- Subjects
DETERMINISTIC algorithms ,NONLINEAR programming ,MATHEMATICAL models ,ALGORITHMS ,MIXED integer linear programming - Abstract
This paper proposes an uncertain multi-period bi-level network interdiction problem with uncertain arc capacities. It is proved that there exists an equivalence relationship between uncertain multi-period network interdiction problem and the obtained deterministic correspondent. Application of the generalized Benders' decomposition algorithm is considered as the solution approach to the resulting mixed-integer nonlinear programming problem. Finally, a numerical example is presented to illustrate the model and the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
27. MODELING THE STABILITY OF A COMPUTER SYSTEM.
- Author
-
LOPEZ, VICTORIA and MIÑANA, GUADALUPE
- Subjects
COMPUTER systems ,PERFORMANCE ,RELIABILITY (Personality trait) ,ALGORITHMS ,DECISION making ,INFORMATION & communication technologies ,INTUITION - Abstract
Performance, reliability and safety are relevant factors when analyzing or designing a computer system. Many studies of performance are based on monitoring and analyzing data from a computer system. One of the most useful pieces of data is the Load Average (LA), which reports the average load of the system over the last minute, the last five minutes, and the last fifteen minutes. Many studies of system performance are based on the load average, obtained by monitoring commands of the operating system, but these are sometimes difficult to understand and far removed from human intuition. The aim of this paper is to present a new procedure that determines the stability of a computer system from a list of load average sample data. The idea is presented as an algorithm based on statistical analysis, the aggregation of information, and its formal specification. The result is an evaluation of the stability of the load and of the computer system through monitoring, without adding any overhead to the system. In addition, the procedure can be used as a software monitor for risk prevention on any vulnerable system. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
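A minimal sketch of the underlying idea, judging stability from load-average samples without adding load to the system; the coefficient-of-variation rule and threshold below are illustrative assumptions, not the paper's procedure.

```python
import statistics

# Classify a list of load-average samples as stable if their relative
# dispersion (coefficient of variation) stays below an arbitrary threshold.
def is_stable(samples, max_cv=0.25):
    mean = statistics.mean(samples)
    cv = statistics.pstdev(samples) / mean  # coefficient of variation
    return cv <= max_cv

steady = [1.0, 1.1, 0.9, 1.0, 1.05]
spiky = [0.2, 3.5, 0.1, 4.0, 0.3]
print(is_stable(steady), is_stable(spiky))  # → True False
```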
28. CLOUD DELPHI METHOD.
- Author
-
YANG, XIAO-JUN, ZENG, LUAN, and ZHANG, RAN
- Subjects
GROUP decision making ,DELPHI method ,PROBLEM solving ,UNCERTAINTY (Information theory) ,FUZZY expert systems ,MATHEMATICAL models ,ALGORITHMS - Abstract
Group decision making is an important category of problem solving techniques for complicated problems, among which the Delphi method has been widely applied. In this paper an improved Delphi method based on the Cloud model is proposed to deal with the fuzziness and uncertainty in experts' subjective judgments. The proposed Cloud Delphi Method (CDM) describes experts' opinions with the Cloud model and aggregates these Cloud opinions using a synthetic algorithm and a weighted average algorithm. Another key point of CDM is that it stabilizes and accommodates the individual fuzzy estimates through defined stability rules rather than forcing them to converge or reduce. The Cloud opinions and aggregation results can be exhibited graphically, helping experts judge intuitively and decreasing the number of repetitive surveys and/or interviews. Moreover, representing experts' opinions with the Cloud model, which combines fuzziness and uncertainty well, is more rigorous and easier. A numerical example is examined to demonstrate the applicability and implementation process of CDM. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
29. COVARIANCE TRACKING WITH FORGETTING FACTOR AND RANDOM SAMPLING.
- Author
-
ZHANG, XUGUANG, LI, XIAOLI, LIANG, MING, and WANG, YANJIE
- Subjects
ANALYSIS of covariance ,ALGORITHMS ,ROBUST control ,FUZZY systems ,REAL-time control ,COMPUTER systems ,STATISTICAL sampling ,STATISTICAL matching - Abstract
Covariance matching is an excellent target tracking algorithm. In this paper, forgetting factor and random sampling methods are proposed to improve the robustness and efficiency of covariance tracking. First, a distance function between covariance matrices is weighted using a forgetting factor based on a fuzzy membership function to overcome disturbances from similar targets. Then a random sampling method is applied to reduce the computing time of covariance matching and to facilitate real-time object tracking. Experimental results show that the proposed algorithm effectively mitigates clutter and occlusion problems at a high computing speed. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
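The forgetting-factor idea can be sketched with scalars standing in for covariance descriptors: older frames receive exponentially smaller weight in the model the tracker matches against. The generic exponential-decay weighting below is an illustrative stand-in for the paper's fuzzy-membership-based factor.

```python
# Exponentially down-weight older frames (ordered oldest -> newest):
# frame at age a gets weight lam**a, then the average is normalised.
def forgetting_average(frames, lam=0.8):
    weights = [lam ** (len(frames) - 1 - t) for t in range(len(frames))]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, frames)) / total

# The newest frame (20.0) pulls the estimate above the plain mean (13.33).
print(round(forgetting_average([10.0, 10.0, 20.0]), 3))  # → 14.098
```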
30. INTUITIONISTIC FUZZY LINGUISTIC QUANTIFIERS BASED ON INTUITIONISTIC FUZZY-VALUED FUZZY MEASURES AND INTEGRALS.
- Author
-
LICONG CUI, YONGMING LI, and XIAOHONG ZHANG
- Subjects
ALGORITHMS ,FUZZY sets ,SET theory ,MACHINE theory ,INTELLIGENT agents ,QUANTIFIERS (Linguistics) ,FUZZY algorithms - Abstract
In this paper, we generalize Ying's model of linguistic quantifiers [M.S. Ying, Linguistic quantifiers modeled by Sugeno integrals, Artificial Intelligence, 170 (2006) 581-606] to intuitionistic linguistic quantifiers. An intuitionistic linguistic quantifier is represented by a family of intuitionistic fuzzy-valued fuzzy measures, and the intuitionistic truth value (the degrees of satisfaction and non-satisfaction) of a quantified proposition is calculated using the intuitionistic fuzzy-valued fuzzy integral. Describing a quantifier by intuitionistic fuzzy-valued fuzzy measures allows us to account for differences in how different persons understand the meaning of the quantifier. If the intuitionistic fuzzy linguistic quantifiers are taken to be linguistic fuzzy quantifiers, then our model reduces to Ying's model. Some excellent logical properties of intuitionistic linguistic quantifiers are obtained, including a prenex normal form theorem. A simple example is presented to illustrate the use of intuitionistic linguistic quantifiers. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
31. CHARACTERIZING TREES IN CONCEPT LATTICES.
- Author
-
BĚLOHLÁVEK, RADIM, DE BAETS, BERNARD, OUTRATA, JAN, and VYCHODIL, VILEM
- Subjects
LATTICE theory ,DOCUMENT clustering ,FUZZY systems ,ALGORITHMS ,CATEGORIES (Mathematics) ,DATA - Abstract
Concept lattices are systems of conceptual clusters, called formal concepts, which are partially ordered by the subconcept/superconcept relationship. Concept lattices are basic structures used in formal concept analysis. In general, a concept lattice may contain overlapping clusters and need not be a tree. On the other hand, tree-like classification schemes are appealing and are produced by several clustering methods. In this paper, we present necessary and sufficient conditions on input data for the output concept lattice to form a tree after one removes its least element. We present these conditions for input data with yes/no attributes as well as for input data with fuzzy attributes. In addition, we show how Lindig's algorithm for computing concept lattices gets simplified when applied to input data for which the associated concept lattice is a tree after removing the least element. The paper also contains illustrative examples. [ABSTRACT FROM AUTHOR]
- Published
- 2008
32. AN AXIOMATIC DEFINITION OF FUZZY DIVERGENCE MEASURES.
- Author
-
COUSO, INÉS and MONTES, SUSANA
- Subjects
ARTIFICIAL intelligence ,ALGORITHMS ,STOCHASTIC processes ,FUZZY systems ,UNCERTAINTY (Information theory) - Abstract
The representation of the degree of difference between two fuzzy subsets by means of a real number has been proposed in previous papers, and it seems to be useful in some situations. However, the requirement of assigning a precise number may lead to the loss of essential information about this difference. Thus, the (crisp) divergence measures studied in previous papers may not distinguish whether the differences between two fuzzy subsets lie in low or high membership degrees. In this paper we propose a way of measuring these differences by means of a fuzzy-valued function which we call a fuzzy divergence measure. We formulate a list of natural axioms that these measures should satisfy and derive additional properties from them, some of which relate to the properties required of crisp divergence measures. We finish the paper by establishing a one-to-one correspondence between families of crisp and fuzzy divergence measures. This result provides a method to build a fuzzy divergence measure from a crisp-valued one. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
33. ESTIMATION OF WEIBULL PARAMETERS USING A FUZZY LEAST-SQUARES METHOD.
- Author
-
Wen-Liang Hung and Yuan-Chen Liu
- Subjects
- *
NOISE , *WEIBULL distribution , *ALGORITHMS , *LEAST squares , *ESTIMATION theory , *STOCHASTIC processes , *DISTRIBUTION (Probability theory) , *PROBABILITY theory - Abstract
The purpose of this paper is to find a robust estimation method for the two-parameter Weibull distribution when outliers are present. This is a relevant problem because of the usefulness of the Weibull distribution in life testing and reliability theory. For that purpose, a cluster-wise fuzzy least-squares algorithm with a noise cluster is used, because a noise cluster can compensate for the effects of outliers. Numerical comparisons between this fuzzy least-squares algorithm and existing methods are carried out; they suggest that the proposed algorithm is preferable when the sample size is large. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
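As background for the abstract above, the classical (non-fuzzy, non-robust) least-squares route to Weibull parameters regresses ln(-ln(1 - F_i)) on ln(x_i) using median-rank plotting positions; the paper's cluster-wise fuzzy variant with a noise cluster builds on this idea. The sketch below uses synthetic data and is illustrative only.

```python
import math
import random

# Least-squares Weibull fit via the linearisation
#   ln(-ln(1 - F)) = k * ln(x) - k * ln(lambda),
# with Benard's median ranks F_i = (i - 0.3) / (n + 0.4).
def weibull_ls(data):
    xs = sorted(data)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs, start=1)]
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    k = (sum((px - mx) * (py - my) for px, py in pts)
         / sum((px - mx) ** 2 for px, _ in pts))       # shape
    lam = math.exp(mx - my / k)                        # scale
    return k, lam

# Synthetic Weibull(k=2, lambda=1.5) sample via inverse transform.
rng = random.Random(42)
sample = [1.5 * (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(500)]
k_hat, lam_hat = weibull_ls(sample)
print(round(k_hat, 2), round(lam_hat, 2))  # close to k = 2, lambda = 1.5
```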
34. EXPECTED VALUE OPERATOR OF RANDOM FUZZY VARIABLE AND RANDOM FUZZY EXPECTED VALUE MODELS.
- Author
-
Yian-Kui Liu and Baoding Liu
- Subjects
FUZZY sets ,RANDOM variables ,OPERATOR theory ,ALGORITHMS - Abstract
Random fuzzy variable is a mapping from a possibility space to a collection of random variables. This paper first presents a new definition of the expected value operator of a random fuzzy variable and proves the linearity of the operator. Then, a random fuzzy simulation approach, which combines fuzzy simulation and random simulation, is designed to estimate the expected value of a random fuzzy variable. Based on the new expected value operator, three types of random fuzzy expected value models are presented to model decision systems where fuzziness and randomness appear simultaneously. In addition, random fuzzy simulation, neural networks and a genetic algorithm are integrated to produce a hybrid intelligent algorithm for solving these random fuzzy expected value models. Finally, three numerical examples are provided to illustrate the feasibility and effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
35. Supervised Ensemble Learning for Vietnamese Tokenization.
- Author
-
Liu, Wuying
- Subjects
VIETNAMESE people ,MONOSYLLABLES ,SYLLABICATION ,LANGUAGE & languages ,ALGORITHMS - Abstract
Vietnamese tokenization is a basic yet challenging problem, and the corresponding algorithms can be used in many applications of natural language processing. In this paper, we investigate the Vietnamese tokenization problem and propose a supervised ensemble learning (SEL) framework as well as a SEL-based tokenization (SELT) algorithm. Supported by the data structure of a syllable-syllable frequency index, the SELT algorithm combines multiple weak tokenizers to form a strong tokenizer. Within the SEL framework, we also investigate the efficient construction of weak tokenizers: we suggest two prediction methods to select a suitable dictionary, and efficiently implement two weak tokenizers with a simple dictionary-based tokenization algorithm. The experimental results show that the SELT algorithm integrating our weak tokenizers achieves state-of-the-art performance in the Vietnamese tokenization task. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
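The simple dictionary-based tokenization used for the weak tokenizers can be sketched as greedy longest-match over syllables (Vietnamese words are groups of space-separated syllables); the tiny dictionary below is invented.

```python
# Toy dictionary of multi- and single-syllable Vietnamese words.
DICT = {"học sinh", "sinh viên", "học", "sinh", "viên", "giỏi"}

# Greedy longest-match: try the longest candidate word starting at each
# position; fall back to the single syllable if nothing matches.
def tokenize(sentence, max_syllables=3):
    syls = sentence.split()
    out, i = [], 0
    while i < len(syls):
        for j in range(min(len(syls), i + max_syllables), i, -1):
            cand = " ".join(syls[i:j])
            if cand in DICT or j == i + 1:
                out.append(cand)
                i = j
                break
    return out

print(tokenize("học sinh giỏi"))  # → ['học sinh', 'giỏi']
```

An ensemble like SELT would run several such tokenizers with different dictionaries and combine their outputs.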
36. INTERVAL METHODS IN KNOWLEDGE REPRESENTATION.
- Author
-
Kreinovich, Vladik
- Subjects
ALGORITHMS ,EVOLUTIONARY computation ,TECHNICAL reports ,PARAMETER estimation ,NEWTON diagrams - Abstract
This section is maintained by Vladik Kreinovich. Please send your abstracts (or copies of papers that you want to see reviewed here) to vladik@utep.edu, or by regular mail to: V. Kreinovich, Department of Computer Science, University of Texas, El Paso, TX 79968, USA. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
37. A Set Covering-Based Diagnostic Expert System to Economic and Financial Applications.
- Author
-
Hu, Cheng-Kai, Liu, Fung-Bao, and Hu, Cheng-Feng
- Subjects
EXPERT systems ,FUZZY measure theory ,VECTOR analysis ,FUZZY relational equations ,ALGORITHMS ,PROBLEM solving - Abstract
This paper considers the identification of the problems that generate anomalies at firms from observed symptoms, on the basis of fuzzy relations and Zadeh's compositional rule of inference. A procedure for determining the fuzzy cause vector of an economic and financial diagnosis problem is proposed, consisting of the design of a fuzzy relational matrix and the resolution of a system of fuzzy relational equations. An efficient algorithm for solving fuzzy relational equations via the associated set covering problem is introduced; it utilizes a back-tracking method to generate each minimal covering, producing no duplicate or non-minimal coverings. A numerical example of diagnosing the causes of firms' insolvency is also included. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
38. A PSO-Based Fuzzy c-Regression Model Applied to Nonlinear Data Modeling.
- Author
-
Soltani, Moez and Chaari, Abdelkader
- Subjects
FUZZY clustering technique ,REGRESSION analysis ,DATA modeling ,LEAST squares ,ALGORITHMS - Abstract
This paper presents a new fuzzy c-regression models (FCRM) clustering algorithm. The main motivation for this work is to develop an identification procedure for nonlinear systems using weighted recursive least squares and particle swarm optimization. The fuzzy c-regression models algorithm is sensitive to initialization, which leads to convergence to a local minimum of the objective function. To overcome this problem, particle swarm optimization is employed to achieve global optimization of FCRM and to finally tune the parameters of the obtained fuzzy model. Weighted recursive least squares is used to identify the unknown parameters of the local linear models. Finally, validation results involving the simulation of two examples demonstrate the effectiveness and practicality of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. Some Algorithms for Group Decision Making with Intuitionistic Fuzzy Preference Information.
- Author
-
Liao, Huchang and Xu, Zeshui
- Subjects
GROUP decision making ,INTUITIONISTIC mathematics ,ALGORITHMS ,INFORMATION theory ,FUZZY sets - Abstract
Intuitionistic fuzzy preference relations have turned out to be a powerful structure for representing decision makers' preference information, especially when decision makers cannot express their preferences accurately due to unquantifiable, incomplete, or unobtainable information, partial ignorance, and so forth. The aim of this paper is to develop techniques for group decision making with intuitionistic fuzzy preference information. Based on the multiplicative consistency of intuitionistic fuzzy preference relations, three algorithms are proposed for intuitionistic fuzzy group decision making. In the case that the decision makers act as separate individuals, the priority vector of each decision maker can be derived directly from the individual intuitionistic fuzzy preference relation, after which an overall priority vector is obtained by synthesizing the individual priorities. For the scenario in which the decision makers act as one individual, two different algorithms based on multiplicative consistency are proposed: the former first constructs a social intuitionistic fuzzy preference relation, while the latter builds a fractional programming model. Some practical examples are given to demonstrate the developed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
40. Self-Tuning Possibilistic c-Means Clustering Models.
- Author
-
Szilágyi, László, Lefkovits, Szidónia, and Szilágyi, Sándor M.
- Subjects
- *
PULSE-code modulation , *FUZZY algorithms , *ALGORITHMS , *PROTOTYPES , *NOISE , *RELAXATION for health , *HEAT storage devices - Abstract
The relaxation of the probabilistic constraint of the fuzzy c-means clustering model was proposed to provide robust algorithms that are insensitive to strong noise and outlier data. These goals were achieved by the possibilistic c-means (PCM) algorithm, but the advantages came together with a sensitivity to cluster prototype initialization. According to the original recommendations, the probabilistic fuzzy c-means (FCM) algorithm should be applied to establish the cluster initialization and possibilistic penalty terms for PCM. However, when FCM fails to provide valid cluster prototypes due to the presence of noise, PCM has no chance to recover and produce a fine partition. This paper proposes a two-stage c-means clustering algorithm to tackle most of the problems enumerated above. In the first stage, called initialization, FCM is performed with two modifications: (1) an extra cluster is added for noisy data; (2) an extra variable and constraint are added to handle clusters of various diameters. In the second stage, a modified PCM algorithm is carried out, which also contains a cluster width tuning mechanism based on which it adaptively updates the possibilistic penalty terms. The proposed algorithm has fewer parameters than PCM when the number of clusters is c > 2. Numerical evaluation involving synthetic and standard test data sets proved the advantages of the proposed clustering model. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
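For contrast with the possibilistic relaxation discussed above, the standard probabilistic FCM membership update (the very constraint PCM relaxes) can be sketched in one-dimensional toy form; data points and prototypes below are invented.

```python
# Probabilistic FCM membership of point x w.r.t. cluster prototypes:
#   u_ik = 1 / sum_j (d_ik / d_jk)^(2 / (m - 1)),
# so memberships across clusters always sum to 1 (the constraint PCM drops).
def fcm_memberships(x, prototypes, m=2.0):
    dists = [abs(x - v) for v in prototypes]
    if any(d == 0 for d in dists):                 # point on a prototype
        return [1.0 if d == 0 else 0.0 for d in dists]
    return [1.0 / sum((di / dj) ** (2 / (m - 1)) for dj in dists)
            for di in dists]

u = fcm_memberships(1.0, [0.0, 3.0])
print([round(v, 3) for v in u])  # → [0.8, 0.2]
```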
41. FUZZY ONTOLOGY ALIGNMENT USING BACKGROUND KNOWLEDGE.
- Author
-
TODOROV, KONSTANTIN, HUDELOT, CELINE, POPESCU, ADRIAN, and GEIBEL, PETER
- Subjects
FUZZY sets ,ONTOLOGY ,INFORMATION retrieval ,DECISION making ,ALGORITHMS - Abstract
We propose an ontology alignment framework with two core features: the use of background knowledge, and the ability to handle vagueness in the matching process and the resulting concept alignments. The procedure is based on a generic reference vocabulary, which is used for fuzzifying the ontologies to be matched. The choice of this vocabulary is in general problem-dependent, although Wikipedia represents a general-purpose source of knowledge that can be used in many cases and even allows cross-language matching. In the first step of our approach, each domain concept is represented as a fuzzy set of reference concepts. In the next step, the fuzzified domain concepts are matched to one another, resulting in fuzzy descriptions of the matches of the original concepts. Based on these concept matches, we propose an algorithm that produces a merged fuzzy ontology capturing what is common to the source ontologies. The paper describes experiments in the multimedia domain using ontologies containing tagged images, as well as an evaluation of the approach in an information retrieval setting. The fuzzy approach has been compared to a classical crisp alignment with the help of a ground truth created from human judgment. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
42. BOUNDED-PARAMETER PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES: FRAMEWORK AND ALGORITHM.
- Author
-
NI, YAODONG and LIU, ZHI-QIANG
- Subjects
MARKOV processes ,ALGORITHMS ,PARAMETER estimation ,ITERATIVE methods (Mathematics) ,COMPUTATIONAL complexity ,UNCERTAINTY ,EMPIRICAL research - Abstract
Partially observable Markov decision processes (POMDPs) are powerful for planning under uncertainty. However, it is usually impractical to employ a POMDP with exact parameters to model the real-life situation precisely, due to various reasons such as limited data for learning the model, inability of exact POMDPs to model dynamic situations, etc. In this paper, assuming that the parameters of POMDPs are imprecise but bounded, we formulate the framework of bounded-parameter partially observable Markov decision processes (BPOMDPs). A modified value iteration is proposed as a basic strategy for tackling parameter imprecision in BPOMDPs. In addition, we design the UL-based value iteration algorithm, in which each value backup is based on two sets of vectors called U-set and L-set. We propose four strategies for computing U-set and L-set. We analyze theoretically the computational complexity and the reward loss of the algorithm. The effectiveness and robustness of the algorithm are shown empirically. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
43. HOW TO RANDOMLY GENERATE MASS FUNCTIONS.
- Author
-
BURGER, THOMAS and DESTERCKE, SÉBASTIEN
- Subjects
DEMPSTER-Shafer theory ,ALGEBRAIC field theory ,ALGORITHMS ,SIMULATION methods & models ,MATHEMATICAL analysis ,NUMERICAL analysis ,STATISTICAL sampling - Abstract
As Dempster-Shafer theory spreads in different application fields, and as mass functions are involved in more and more complex systems, the need for algorithms randomly generating mass functions arises. Such algorithms can be used, for instance, to evaluate some statistical properties or to simulate the uncertainty in some systems (e.g., data base content, training sets). As such random generation is often perceived as secondary, most of the proposed algorithms use straightforward procedures whose sample statistical properties can be difficult to characterize. Thus, although such algorithms produce randomly generated mass functions, they do not always produce what could be expected from them (for example, uniform sampling in the set of all possible mass functions). In this paper, we briefly review some well-known algorithms, explaining why their statistical properties are hard to characterize. We then provide relatively simple algorithms and procedures to perform efficient random generation of mass functions whose sampling properties are controlled. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
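One simple generation scheme with controllable sampling properties, of the kind the abstract above argues for, is uniform sampling over the probability simplex of focal-set masses via differences of sorted uniforms; the focal sets below are illustrative labels.

```python
import random

# Sample a mass function uniformly over the simplex: cut [0, 1] at
# len(focal_sets) - 1 sorted uniform points and take segment lengths.
def random_mass(focal_sets, rng):
    cuts = sorted(rng.random() for _ in range(len(focal_sets) - 1))
    bounds = [0.0] + cuts + [1.0]
    return {f: bounds[i + 1] - bounds[i] for i, f in enumerate(focal_sets)}

rng = random.Random(7)
m = random_mass(["{a}", "{b}", "{a,b}"], rng)
print(m)  # masses are non-negative and sum to 1
```

Naive alternatives, such as drawing independent uniforms and normalising, concentrate mass near the simplex centre, which is exactly the kind of uncharacterised sampling bias the paper warns about.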
44. MEAN-SEMIVARIANCE MODELS FOR PORTFOLIO OPTIMIZATION PROBLEM WITH MIXED UNCERTAINTY OF FUZZINESS AND RANDOMNESS.
- Author
-
ZHONGFENG QIN, DAVID Z. W. WANG, and XIANG LI
- Subjects
ARITHMETIC mean ,VARIANCES ,INVESTMENTS ,FUZZY systems ,EXPECTED returns ,MATHEMATICAL proofs ,ALGORITHMS - Abstract
In practice, security returns cannot be accurately predicted due to a lack of historical data. Therefore, statistical methods and experts' experience are integrated to estimate future security returns, which are hereinafter regarded as random fuzzy variables. The random fuzzy variable is a powerful tool for portfolio optimization problems involving stochastic parameters with ambiguous expected returns. In this paper, we first define the semivariance of a random fuzzy variable and prove several of its properties. Taking semivariance as a risk measure, we establish mean-semivariance models for the portfolio optimization problem with random fuzzy returns. We design a hybrid algorithm with random fuzzy simulation to solve the proposed models in general cases. Finally, we present a numerical example and compare the results to illustrate the mean-semivariance model and the effectiveness of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
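The crisp semivariance that the paper generalises to random fuzzy returns counts only downside deviations from the mean; a minimal sketch with invented return data:

```python
# Sample semivariance: average of squared deviations below the mean only,
# so upside surprises carry no risk (unlike ordinary variance).
def semivariance(returns):
    mu = sum(returns) / len(returns)
    return sum(min(r - mu, 0.0) ** 2 for r in returns) / len(returns)

rets = [0.10, -0.05, 0.03, -0.02]
print(semivariance(rets))  # only -0.05 and -0.02 contribute
```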
45. AN OVERVIEW OF p-SENSITIVE k-ANONYMITY MODELS FOR MICRODATA ANONYMIZATION.
- Author
-
TRUTA, TRAIAN MARIUS, CAMPAN, ALINA, and XIAOXUN SUN
- Subjects
- *
DATA protection , *ANONYMITY , *ALGORITHMS , *PRIVACY , *COMPUTER security , *DATABASE design - Abstract
In this paper, we present an overview of p-sensitive k-anonymity models, including the basic model, extended p-sensitive k-anonymity, constrained p-sensitive k-anonymity, and (p+, α)-sensitive k-anonymity. Existing properties of these models are reviewed and illustrated, and new properties regarding the maximum number of QI-clusters are discussed and proved. This paper includes a review of related anonymity models and a very brief summary of existing algorithms for the family of p-sensitive k-anonymity models. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
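The basic model surveyed here combines two checks per QI-cluster: at least k records (k-anonymity) and at least p distinct sensitive values (p-sensitivity). A minimal sketch of that definition, with a made-up micro-dataset for illustration:

```python
def is_p_sensitive_k_anonymous(clusters, k, p):
    """Basic p-sensitive k-anonymity: every QI-cluster must hold
    at least k records and at least p distinct sensitive values.
    Each cluster is a list of (quasi_identifier, sensitive) pairs."""
    return all(
        len(c) >= k and len({sens for _, sens in c}) >= p
        for c in clusters)

clusters = [
    [(("25-30", "F"), "flu"), (("25-30", "F"), "cold"),
     (("25-30", "F"), "flu")],
    [(("31-40", "M"), "flu"), (("31-40", "M"), "asthma"),
     (("31-40", "M"), "ulcer")],
]
assert is_p_sensitive_k_anonymous(clusters, k=3, p=2)
assert not is_p_sensitive_k_anonymous(clusters, k=3, p=3)
```

The second assertion fails on the first cluster, which has three records but only two distinct diagnoses; this is exactly the attribute-disclosure gap that p-sensitivity adds on top of plain k-anonymity.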
46. INDUCING FUZZY REGRESSION TREE FORESTS USING ARTIFICIAL IMMUNE SYSTEMS.
- Author
-
GASIR, FATHI, CROCKETT, KEELEY, and BANDAR, ZUHAIR
- Subjects
FUZZY logic ,TREE graphs ,REGRESSION analysis ,BIOLOGICAL networks ,IMMUNE system ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
Fuzzy decision forests aim to improve the predictive power of single fuzzy decision trees by allowing multiple views of the same domain to be modelled. Such forests have been successfully created for classification problems where the outcome field is discrete; however, predicting a continuous output value is more challenging, as the outputs of multiple fuzzy decision trees must be combined. This paper presents a new approach to creating fuzzy regression tree forests based upon the induction of multiple fuzzy regression decision trees from one training sample, where each tree represents a different view of the data domain. The individual fuzzy regression trees are induced using a proven algorithm known as Elgasir, which fuzzifies crisp CHAID decision trees using trapezoidal membership functions and applies Takagi-Sugeno inference to obtain the final predicted values. A modified version of the Artificial Immune System Network model (opt-aiNet) is then used for the simultaneous optimization of the membership functions across all trees within the forest. A strength of the proposed method is that the data does not require fuzzification before forest induction, thus reducing pre-processing time and the need for subjective human experts. Five problem sets from the UCI and KEEL repositories are used to evaluate the approach. The experimental results show that fuzzy regression tree forests reduce the error rate compared with single fuzzy regression trees. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
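Two building blocks named in this abstract, trapezoidal membership functions and Takagi-Sugeno inference, can be sketched generically (the breakpoints and rule consequents below are illustrative, not Elgasir's learned values):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c],
    falls on [c, d], and is 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def sugeno_output(firings, consequents):
    """Zero-order Takagi-Sugeno inference: a firing-strength
    weighted average of the rule consequents."""
    den = sum(firings)
    return sum(w * z for w, z in zip(firings, consequents)) / den if den else 0.0

w = [trapezoid(4.0, 0, 2, 3, 6), trapezoid(4.0, 3, 5, 7, 9)]
y = sugeno_output(w, [10.0, 20.0])  # lies between the two consequents
```

Averaging crisp Sugeno outputs is also a natural way to combine the trees of a regression forest, since each tree already emits a continuous value.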
47. SIMILARITY-BASED RELATIONS IN DATALOG PROGRAMS.
- Author
-
HAJDINJAK, MELITA and BAUER, ANDREJ
- Subjects
DATA loggers ,DATABASES ,INFORMATION retrieval ,ALGORITHMS ,RELATION algebras ,QUERY (Information retrieval system) - Abstract
We consider similarity-based relational databases that allow the retrieval of approximate data, finding data within a given range of distance or similarity, and support for imprecise queries. We focus on the recently introduced relational algebra with similarities on -relations, which are annotated with multi-dimensional similarity values, each dimension referring to a single attribute. The codomains of the annotated relations are De Morgan frames, and the annotations express the relevance of the tuples as answers to a similarity-based query. In this paper, we study Datalog programs on -relations, with and without negation. We describe the least-fixpoint algorithm for safe and rectified Datalog programs on -relations with finite support but without negative literals in the body. We further describe the perfect-minimal-fixpoint algorithm for a Datalog program on -relations with finite support and negative literals in the body when the rules are safe, rectified and stratified. We introduce the idea of controlling the calculation of the annotations so that tuples that enter an IDB relation last are deemed less desirable than those that enter first. For this we define a damping function that augments or diminishes the individual annotations contributing to the final annotations of tuples. With a damping function, for instance, long chains of inferences can be made significantly less desirable or even totally undesirable. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
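The least-fixpoint computation with damped annotations described in this abstract can be sketched on the classic transitive-closure program `path(X,Y) :- edge(X,Y)` and `path(X,Y) :- path(X,Z), edge(Z,Y)`. This is a generic illustration under simplifying assumptions (scalar annotations in [0, 1], `min` as the combining operation, a constant multiplicative damping factor), not the paper's De Morgan frame machinery:

```python
def transitive_closure(edges, damping=0.9):
    """Naive least-fixpoint evaluation over similarity-annotated
    edge facts. Annotations combine with min (a t-norm), and each
    derivation step is multiplied by `damping`, so tuples reached
    by longer inference chains end up less desirable."""
    path = dict(edges)           # EDB facts seed the IDB relation
    changed = True
    while changed:               # iterate to the least fixpoint
        changed = False
        for (x, z), s1 in list(path.items()):
            for (z2, y), s2 in edges.items():
                if z == z2:
                    s = damping * min(s1, s2)
                    if s > path.get((x, y), 0.0):
                        path[(x, y)] = s
                        changed = True
    return path

edges = {("a", "b"): 1.0, ("b", "c"): 0.8}
p = transitive_closure(edges)    # derives ("a", "c") with a damped score
```

Because `damping < 1` and scores only ever increase toward a bounded ceiling, the iteration terminates; with `damping = 0` every derived (non-EDB) tuple would become totally undesirable, matching the abstract's limiting case.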
48. BAYESIAN NETWORK REVISION WITH PROBABILISTIC CONSTRAINTS.
- Author
-
PENG, YUN, DING, ZHONGLI, ZHANG, SHENYONG, and PAN, RONG
- Subjects
BAYESIAN analysis ,INTEGRATION (Theory of knowledge) ,PROBLEM solving ,ALGORITHMS ,DISTRIBUTION (Probability theory) ,GLOBAL analysis (Mathematics) ,MATHEMATICAL decomposition ,ITERATIVE methods (Mathematics) - Abstract
This paper deals with an important probabilistic knowledge integration problem: revising a Bayesian network (BN) to satisfy a set of probability constraints representing new or more specific knowledge. We propose to solve this problem by adapting IPFP (the iterative proportional fitting procedure) to BNs. The resulting algorithm, E-IPFP, integrates the constraints by changing only the conditional probability tables (CPTs) of the given BN while preserving the network structure; the probability distribution of the revised BN is as close as possible to that of the original BN. Two variations of E-IPFP are also proposed: 1) E-IPFP-SMOOTH, which deals with the situation where the probabilistic constraints are inconsistent with each other or with the network structure of the given BN; and 2) D-IPFP, which reduces the computational cost by decomposing a global E-IPFP into a set of smaller local E-IPFP problems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
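The IPFP core that E-IPFP builds on can be sketched on a plain joint distribution table: repeatedly rescale the joint so each constrained marginal matches its target, which converges to the closest (in KL divergence) distribution satisfying consistent constraints. This illustrates only the fitting step, not the paper's CPT rewriting on a network structure; the example table is made up.

```python
def ipfp(joint, constraints, iters=50):
    """Iterative proportional fitting on a joint distribution
    (dict: outcome tuple -> probability). `constraints` maps a
    variable index to its target marginal {value: prob}; each
    sweep rescales the joint to match one marginal at a time."""
    joint = dict(joint)
    for _ in range(iters):
        for var, target in constraints.items():
            marg = {}
            for outcome, p in joint.items():
                marg[outcome[var]] = marg.get(outcome[var], 0.0) + p
            for outcome in joint:
                v = outcome[var]
                if marg[v] > 0:
                    joint[outcome] *= target[v] / marg[v]
    return joint

joint = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
fitted = ipfp(joint, {0: {0: 0.7, 1: 0.3}})
```

With a single constraint one sweep already fits the marginal exactly while leaving the conditional distribution of the other variable untouched, which is the "as close as possible to the original" behaviour the abstract describes.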
49. ADAPTIVE FUZZY-BASED TRACKING CONTROL FOR A CLASS OF STRICT-FEEDBACK SISO NONLINEAR TIME-DELAY SYSTEMS WITHOUT BACKSTEPPING.
- Author
-
YOUSEF, HASAN A., HAMDY, MOHAMED, and SHAFIQ, MUHAMMAD
- Subjects
ADAPTIVE fuzzy control ,TRACKING control systems ,FEEDBACK control systems ,NONLINEAR theories ,TIME delay systems ,UNCERTAINTY (Information theory) ,ALGORITHMS ,COMPUTER simulation ,SET theory - Abstract
In this paper, an adaptive fuzzy tracking control is presented for a class of SISO nonlinear strict-feedback systems with unknown time delays. The proposed algorithm does not use the backstepping scheme; rather, it converts the strict-feedback time-delayed system to the normal form. A Mamdani-type fuzzy system is employed to approximate on-line the lumped uncertain system nonlinearity. The developed controller guarantees uniform ultimate boundedness of all signals in the closed-loop system. The designed control law is independent of the time delays and has a simple form, with only one adaptive parameter vector to be updated on-line. As a result, the proposed control algorithm is considerably simpler than previous ones based on backstepping. Lyapunov stability of the fuzzy system parameters and the filtered tracking error is employed to guarantee semiglobal uniform boundedness of the closed-loop system. Simulation results are presented to verify the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
50. FUZZY DISTANCE SENSOR DATA INTEGRATION AND INTERPRETATION.
- Author
-
FALOMIR, ZOE, CASTELLÓ, VICENT, ESCRIG, M. TERESA, and PERIS, JUAN CARLOS
- Subjects
FUZZY sets ,DATA integration ,DETECTORS ,ROBOTICS ,ALGORITHMS ,QUALITATIVE reasoning ,ROBUST control ,PATTERN recognition systems - Abstract
An approach to distance sensor data integration that obtains a robust interpretation of the robot environment is presented in this paper. This approach consists of obtaining patterns of fuzzy distance zones from sensor readings; comparing these patterns in order to detect non-working sensors; and integrating the patterns obtained by each kind of sensor to obtain a final pattern that detects obstacles of any sort. A dissimilarity measure between fuzzy sets has been defined and applied in this approach. Moreover, an algorithm to classify orientation reference systems (built from corners detected in the robot world) as open or closed is also presented. The final pattern of fuzzy distances, resulting from the integration process, is used to extract the important reference systems when a glass wall is included in the robot environment. Finally, our approach has been tested on an ActivMedia Pioneer 2 DX mobile robot using Player/Stage as the control interface, and promising results have been obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
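The two ingredients of this abstract, fuzzy distance zones and a dissimilarity measure between the resulting fuzzy patterns, can be sketched as follows. The zone breakpoints and the mean-absolute-difference measure are illustrative assumptions, not the paper's calibration or its specific dissimilarity definition:

```python
def zone_memberships(d):
    """Fuzzy distance zones for a sensor reading d (in metres):
    overlapping trapezoids for 'near', 'medium' and 'far'."""
    def trap(x, a, b, c, e):
        if x <= a or x >= e:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (e - x) / (e - c)
    return {"near": trap(d, -1.0, 0.0, 0.5, 1.0),
            "medium": trap(d, 0.5, 1.0, 2.0, 2.5),
            "far": trap(d, 2.0, 2.5, 5.0, 6.0)}

def dissimilarity(p1, p2):
    """Mean absolute membership difference: one simple
    dissimilarity measure between two fuzzy patterns."""
    keys = p1.keys()
    return sum(abs(p1[k] - p2[k]) for k in keys) / len(keys)

d = dissimilarity(zone_memberships(0.3), zone_memberships(2.2))
```

A sensor whose pattern is persistently dissimilar from its neighbours' patterns can then be flagged as non-working, which is the comparison step the abstract describes.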