1,550 results
Search Results
152. Using Artificial Intelligence for Creating and Managing Organizational Knowledge
- Author
-
Matija Kovačić, Maja Mutavdžija, Krešimir Buntak, and Igor Pus
- Subjects
artificial intelligence ,data mining ,digital transformation ,knowledge management ,organizational knowledge ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
As the organizational environment changes, organizations must adapt their business models to the new conditions that arise. By adapting to these conditions, organizations create knowledge. The main aim of the paper is to show the possibilities of using AI to create and manage organizational knowledge within the organization and to use that knowledge for competitive advantage. This paper presents the results of secondary research on the application of artificial intelligence in knowledge creation; based on that research, a framework for knowledge creation is proposed. The framework starts with collecting data from sensors on devices or machines and from employees. Gathering large amounts of data then creates Big Data databases, from which knowledge is created through data mining. In further research, the proposed framework will be used to conduct primary research on the impact of artificial intelligence on creating and managing knowledge.
- Published
- 2022
- Full Text
- View/download PDF
153. Data Acquisition Model for Analyzing Cost Overrun in Construction Projects using KDD.
- Author
-
Ghazal, Mai Monir and Hammad, Ahmed Mohamed
- Subjects
DATA acquisition systems ,COST overruns ,CONSTRUCTION projects ,DATA mining ,DATA warehousing - Abstract
Projects are considered successful when completed on time per the baseline schedule and within the allocated target budget. Cost overrun is a worldwide challenge to the successful completion of construction projects. To overcome this problem, earlier studies investigated the main causes of cost overrun. Knowledge Discovery in Data (KDD) and data mining techniques have been implemented successfully in other research areas to extract new and useful knowledge from historical data. These techniques can also be applied to projects' historical data if this data is captured in an organized and consistent manner. The first section of this paper presents a comprehensive literature review of previous research to identify the major factors causing cost overrun. This analysis resulted in selecting twelve major factors that can be easily measured and analyzed in construction projects. After that, a data acquisition model is developed to capture the relevant historical data and metadata from completed construction projects in a reliable data warehouse. The developed data warehouse would enable the implementation of KDD and data mining techniques to tackle the cost overrun problem. [ABSTRACT FROM AUTHOR]
- Published
- 2018
154. A Knowledge Management System for Analysis of Organisational Log Files.
- Author
-
Teixeira, Carlos, de Vasconcelos, José Braga, and Pestana, Gabriel
- Subjects
KNOWLEDGE management ,APPLICATION software ,DATA mining ,ESCALATION of commitment ,COMPUTER network architectures - Abstract
This paper presents a research approach for the analysis of organisational log files. The purpose of log file analysis is to provide information about how the client uses an app (software application) and to detect anomalies that are not perceived by the user, so those anomalies can be corrected before a problem escalates. The outcome of this research aims to define an architecture to find software flows and apply data mining techniques to measure and obtain knowledge to be recorded in a knowledge management system (KMS). [ABSTRACT FROM AUTHOR]
- Published
- 2018
155. Research on Structural Knowledge Extraction and Organization of Multimodal Official Documents (多模态公文的结构知识抽取与组织研究).
- Author
-
徐瑞麟, 耿伯英, and 刘树絗
- Subjects
KNOWLEDGE management ,KNOWLEDGE graphs ,ORGANIZATION management ,DATA mining ,LOGIC - Abstract
Copyright of Systems Engineering & Electronics is the property of Journal of Systems Engineering & Electronics Editorial Department and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
156. Improving the Cognitive Agent Intelligence by Deep Knowledge Classification.
- Author
-
Chemchem, Amine, Alin, François, and Krajecki, Michael
- Subjects
INTELLIGENT agents ,COMPUTATIONAL intelligence ,DEEP learning ,ARTIFICIAL neural networks ,DATA mining ,KNOWLEDGE management - Abstract
In this paper, a new idea is developed for improving agent intelligence: with the presented convolutional neural network (CNN) approach to knowledge classification, the agent is able to manage its knowledge. This new concept allows the agent to select only the actionable rule class instead of trying to infer over its whole rule base exhaustively. In addition, this research includes a comparative study between the proposed CNN approach and classical classification approaches. As expected, the deep learning method outperforms the others in terms of classification accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
157. A NEW TYPOLOGY DESIGN OF PERFORMANCE METRICS TO MEASURE ERRORS IN MACHINE LEARNING REGRESSION ALGORITHMS.
- Author
-
Botchkarev, Alexei
- Subjects
MACHINE learning ,WEBOMETRICS ,CLASSIFICATION ,KNOWLEDGE management ,DATA mining ,PROFESSIONAL peer review - Abstract
Aim/Purpose: The aim of this study was to analyze various performance metrics and approaches to their classification. The main goal was to develop a new typology that will help to advance knowledge of metrics and facilitate their use in machine learning regression algorithms.
Background: Performance metrics (error measures) are vital components of the evaluation frameworks in various fields. A performance metric can be defined as a logical and mathematical construct designed to measure how close the actual results are to what was expected or predicted. A vast variety of performance metrics has been described in the academic literature; the most commonly mentioned in research studies are Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), etc. Knowledge about metrics' properties needs to be systematized to simplify the design and use of the metrics.
Methodology: A qualitative study was conducted to achieve the objectives, involving identification of related peer-reviewed research studies, literature reviews, critical thinking, and inductive reasoning.
Contribution: The main contribution of this paper is in ordering knowledge of performance metrics and enhancing understanding of their structure and properties by proposing a new typology, a generic mathematical formula for primary metrics, and a visualization chart.
Findings: Based on the analysis of the structure of numerous performance metrics, we propose a framework of metrics that includes four categories: primary metrics, extended metrics, composite metrics, and hybrid sets of metrics. The paper identifies three key components (dimensions) that determine the structure and properties of primary metrics: the method of determining point distance, the method of normalization, and the method of aggregating point distances over a data set. For each component, implementation options have been identified. The suggested new typology has been shown to cover over 40 commonly used primary metrics.
Recommendations for Practitioners: The presented findings can be used to facilitate teaching performance metrics to university students and to expedite metrics selection and implementation for practitioners.
Recommendations for Researchers: By using the proposed typology, researchers can streamline the development of new metrics with predetermined properties.
Impact on Society: The outcomes of this study could be used to improve evaluation results in machine learning regression, forecasting, and prognostics, with direct or indirect positive impacts on innovation and productivity in a societal sense.
Future Research: Future research is needed to examine the properties of the extended metrics, composite metrics, and hybrid sets of metrics. An empirical study of the metrics is needed, using R Studio or Azure Machine Learning Studio, to find associations between the properties of primary metrics and their "numerical" behavior across a wide spectrum of data characteristics and business or research requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
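As an illustration of the primary-metric structure discussed in the abstract above (a point-distance method plus an aggregation method over a data set), the two metrics the study names, MAE and RMSE, can be sketched in Python. The sample values are hypothetical, chosen only to show the calculation:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: absolute point distances, averaged over the data set."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: squared point distances, averaged, then square-rooted."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]
print(mae(actual, predicted))   # 0.75
print(rmse(actual, predicted))  # ~0.935
```

Note how the two metrics share the same skeleton and differ only in the point-distance and aggregation steps, which is the decomposition the proposed typology formalizes.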
158. Knowledge Discovery in Case Studies: The Case Insight Method for Case-Based Problem Solving.
- Author
-
Bettoni, Marco
- Subjects
DATA mining ,PROBLEM solving ,SYSTEMS theory ,QUALITATIVE research ,KNOWLEDGE management - Abstract
The topic of this paper is a new method of knowledge discovery in documents called "Case Insight" (abbreviated to CI). The research question that led to this development was "How can we discover knowledge through case studies and make it usable for case-based problem solving?" To answer this question, this research took a Systems Thinking and Networked Thinking qualitative approach. Case-based problem-solving uses knowledge contained in authentic case descriptions (i.e. "good practice" or even "best practice" cases) and adapts it to the requirements of a new problem. Who can use this? Managers and management consultants who are starting out in their careers can benefit in particular from the CI method as it allows them to expand their repertoire of experience in problem-solving on the basis of case studies, i.e. without being involved in projects. All those interested in solving complex management problems in a case-based way also form part of the target audience. Case studies contain a great deal of problem-solving knowledge but only part of that knowledge can be absorbed through simple reading. The rest remains difficult to access, a hidden treasure, so to speak. Why is that? The reason is that knowledge discovery in case studies is made more difficult due to two obstacles: firstly, the texts are not sufficiently brain-friendly and secondly, they are not designed holistically enough. The CI method makes it possible to overcome these obstacles by means of CI tools and CI models. Firstly, CI tools are used to analyse case studies by comparing concepts, ideas, etc. and combining them into a whole; secondly, CI models make knowledge discovered in this way usable in the form of brain-friendly and holistic knowledge structures. Thus, knowledge discovery through the CI method complies with Immanuel Kant's definition of knowledge as "a whole of compared and linked ideas". [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
159. An effective knowledge quality framework based on knowledge resources interdependencies.
- Author
-
Sabetzadeh, Farzad and Tsui, Eric
- Subjects
THEORY of knowledge ,DATA mining ,INFORMATION resources ,KNOWLEDGE management ,BULLETIN boards - Abstract
Purpose – The purpose of this paper is to introduce a new knowledge quality assessment framework based on interdependencies between content and schema as knowledge resources to enhance the quality of the knowledge that is being generated, disseminated and stored in a collaborative environment. Design/methodology/approach – A knowledge elaboration approach is based on intervening factors of schematic clustering applied to a trial wiki bulletin board. Through this schematic intervention in the form of group creation within a wiki environment, a user-centric mechanism is created to substantiate, compose and narrate the generated contents in a self-organizing way. Findings – Through this approach, quality in content can be enhanced by means of a favourably manipulated collaboration schema adopted by the knowledge management system (KMS) users instead of applying knowledge mining tools. Research limitations/implications – With consideration to trust as a significant factor in this study, the verification and referral process may vary for KMS structures that are of larger scale or in low-trust collaborative environments. Originality/value – This study demonstrates transition to higher quality knowledge with less time spent on the original content refinement and composition by paying due consideration to the interdependencies between knowledge resource content and its schema. Validation is done via a clustered group structure in a specially designed wiki which had been used as a discussion bulletin board on directed topics over an extended period. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
160. Mining Associated Patterns from Wireless Sensor Networks.
- Author
-
Rashid, Md. Mamunur, Gondal, Iqbal, and Kamruzzaman, Joarder
- Subjects
DATA mining ,WIRELESS sensor networks ,PATTERN recognition systems ,KNOWLEDGE management ,STATISTICAL correlation - Abstract
Mining sensor data for useful knowledge extraction is a very challenging task. Existing works generate sensor association rules using the occurrence frequency of patterns to extract the knowledge. These techniques often generate a huge number of rules, most of which are non-informative or fail to reflect true correlations among sensor data. In this paper, we propose a new type of behavioral pattern called associated sensor patterns, which capture association-like co-occurrences as well as the temporal correlations linked with such co-occurrences. To capture such patterns, a compact tree structure called the associated sensor pattern tree (ASP-tree) and a mining algorithm (ASP) are proposed, which use a pattern-growth-based approach to generate all associated patterns with only one scan over the dataset. Moreover, as a data stream flows through, old information may lose significance for the current time. To capture the significance of recent data, the ASP-tree is further enhanced into the SWASP-tree by adopting a sliding observation window and updating the tree structure accordingly. Finally, the window size is made dynamically adaptive to ensure efficient resource usage. Different characteristics of the proposed techniques and their computational complexity are presented. Experimental results show that our approach is very efficient in discovering associated sensor patterns and outperforms existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
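The abstract above combines two ideas: counting sensor co-occurrences and restricting the count to a sliding observation window over the stream. A minimal sketch of that combination (not the authors' ASP-tree, which uses a compact tree and pattern growth rather than brute-force pair counting; the stream, window size, and support threshold here are hypothetical) is:

```python
from collections import Counter, deque
from itertools import combinations

def frequent_pairs(epochs, window_size, min_support):
    """Count sensor co-occurrences within a sliding window of the most recent
    epochs, returning pairs whose count meets the minimum support threshold."""
    window = deque(maxlen=window_size)   # older epochs fall out automatically
    for epoch in epochs:
        window.append(frozenset(epoch))
    counts = Counter()
    for epoch in window:
        for pair in combinations(sorted(epoch), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# Hypothetical stream: each epoch lists the sensors that reported together.
stream = [["s1", "s2"], ["s1", "s2", "s3"], ["s2", "s3"], ["s1", "s2"]]
print(frequent_pairs(stream, window_size=3, min_support=2))
```

With `window_size=3`, the oldest epoch is ignored, which is the effect the SWASP-tree achieves incrementally instead of by recounting.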
161. How to measure similarity for multiple categorical data sets?
- Author
-
Park, Simon, Song, Justin, Lee, James, Lee, Wookey, and Ree, Sangbok
- Subjects
DATA distribution ,DATA mining ,CATEGORIES (Mathematics) ,KNOWLEDGE management ,MEASUREMENT of distances - Abstract
How can we measure similarity or distance for multiple categorical data? Measuring similarity or distance between objects appropriately is an important step in the data mining and knowledge management process. Measurements for continuous data are well defined and relatively easy to calculate. However, the notion of similarity for categorical data is not simple, since categorical data usually cannot simply be translated into a numerical format, and such data also carry their own priorities, structures, and distributions. In this paper, we propose a new measure for multiple categorical data sets that uses the data distribution. Our new measure, MCSM (Multiple Categorical Similarity Measure), successfully addresses the conventional drawbacks of measures for multiple categorical data sets; we verify the measure with mathematical proofs and experimentation. The experimental results show that our measure is powerful for multiple categorical data sets with proper data distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
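The measurement problem the abstract above raises can be illustrated with a simple distribution-aware baseline: overlap similarity weighted by value frequency, so that a match on a rare value counts for more than a match on a common one. This is a generic sketch of that family of measures, not the authors' MCSM; the example records are hypothetical:

```python
def weighted_similarity(x, y, data):
    """Average per-attribute match score across two categorical records.
    A match contributes 1 minus the relative frequency of the shared value
    (rare-value matches score higher); a mismatch contributes 0."""
    n = len(data)
    score = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if a == b:
            freq = sum(1 for row in data if row[i] == a) / n
            score += 1.0 - freq
    return score / len(x)

# Hypothetical records: (color, shape, size)
data = [("red", "round", "small"),
        ("red", "square", "large"),
        ("blue", "round", "small"),
        ("red", "round", "large")]
print(weighted_similarity(data[0], data[3], data))
```

A naive overlap measure would ignore the data distribution entirely; weighting by frequency is one simple way to use it, which is the general direction the paper pursues for multiple data sets.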
162. Collaborative Networks - Premises for Exploitation of Inter-Organizational Knowledge Management.
- Author
-
MIRCEA, Marinela
- Subjects
KNOWLEDGE management ,DATA mining ,DATA collection platforms ,HUMAN factors in management information systems ,COMPUTER storage capacity - Abstract
Inter-organizational knowledge management in the context of collaborative networks is a critical activity for business success. As collaborative-network technologies have evolved, increasingly capable instruments have been created to exploit this knowledge. As a development cycle, inter-organizational knowledge is built on the foundation of information and data owned by the participants in the collaborative networks. One of the most widely used instruments for exploiting this data and knowledge, with the purpose of creating new knowledge, is data mining. In the context of this paper, data mining is the process of discovering patterns and hidden relations in very large data collections stored in data banks or databases. Because only in extremely rare cases does reading data tables record by record lead to the discovery of useful patterns, the information must be processed automatically, a process known as Knowledge Discovery. Knowledge Discovery combines the power of computers with a human operator who has the ability to find the visual patterns revealed by the system. Using an automated data mining system, the computer finds the existing informational patterns, and the human factor (the analyst) evaluates those patterns and picks the ones that are really relevant for the current analysis. Considering the current technological context, where storage devices are ever more accessible and capable, storage capacity is no longer a barrier preventing storage of all required data. Exploitation of inter-organizational knowledge in collaborative networks leads the research to the field of business intelligence applied even to the social environment. In the literature, this approach belongs to the general branch of social business intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
163. FRAMEWORK FOR EVALUATING THE EFFECTIVENESS OF DATA MINING IN THE ANALYSIS OF DYNAMIC DATA FOR SUPPORTING FINANCIAL DECISION-MAKING.
- Author
-
Taleb, Nasser and Mohamed, Elfadil A.
- Subjects
DATA mining ,FINANCIAL planning ,FINANCIAL management ,KNOWLEDGE management ,ECONOMIC aspects of decision making - Abstract
Most of the literature today defines data mining as the process of discovering hidden patterns in large amounts of data. The discovery of such patterns involves solving problems by analyzing data already present in large quantities. The discovered patterns must be meaningful in that they lead to some advantage, usually an economic one. Data mining has been widely used in several fields, among them financial analysis. This paper presents a framework for evaluating the effectiveness of data mining in the analysis of dynamic data for supporting financial decisions. The framework suggests integrating a data mining tool into a decision support system (DSS), an interactive computer-based system for supporting decision making. A DSS is regarded as an information system tool for solving problems of a semi-structured or unstructured nature. [ABSTRACT FROM AUTHOR]
- Published
- 2015
164. Knowledge Sharing Motivation Among External and Internal IT Workers.
- Author
-
Koriat, Noam and Gelbard, Roy
- Subjects
KNOWLEDGE management ,EMPLOYEES ,INSOURCING ,INFORMATION technology outsourcing ,DATA mining ,COLLECTIVE action - Abstract
This paper extends a previous study that proposed an integrated model to test knowledge sharing (KS) motivation among information technology (IT) workers. While the previous study focussed on the differences in KS between internal and external IT workers, the perspective of the current paper is broader; it proposes additional hypotheses and uses both inferential statistics and data mining techniques to detect further practical aspects of the integrated model's findings. Because data mining techniques are useful in extracting patterns and gaining insights from data, they are implemented here alongside inferential statistics. The present study also looks into the employment-contract factor, to better capture the differences between internal and external IT workers. The study reveals that external workers score significantly lower than internal workers on almost every component of the integrated KS model. This gives rise to five practical implications for knowledge management (KM) and employment policies, including factors and practices that should be taken into consideration when employing external workers, to help motivate collaborative behaviour in IT departments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
165. Towards Extended Data Mining: An Examination of Technical Aspects.
- Author
-
Kaspa, Lakshmi Prasanna, Akella, Venkata Naga Sai Sriram, Chen, Zhengxin, and Shi, Yong
- Subjects
DATA mining ,KNOWLEDGE management ,INFORMATION services management ,BIG data ,DATA science - Abstract
Data mining has been an active research area for a couple of decades, yet the complicated nature of data mining is still not fully understood. One common misunderstanding of data mining is: give me the data set, and data mining tools will show me the hidden knowledge. However, this thinking is quite naive and is not realistic in many real-world applications. In this paper, we explore extended data mining, which has the ultimate goal of automatically collecting additional data when needed for effective data mining. Existing web crawling and scraping techniques can be incorporated, but additional steps are still needed. We examine the important technical aspects of extended data mining via web crawling and scraping. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
166. Identifying Factors of Customer Satisfaction Using Reviews and Text Mining.
- Author
-
Suzuki, Takayuki, Gemba, Kiminori, and Aoyama, Atsushi
- Subjects
CUSTOMER satisfaction ,TEXT mining ,INNOVATION management ,INNOVATIONS in business ,KNOWLEDGE management ,CREATIVE ability in business ,DATA mining - Abstract
In recent years, various methods have been developed that enable enterprises and organizations to collect information on customer sentiments, perceptions, and demand. However, such methods do not provide practical guidance on how enterprises and organizations can analyze and use such information in order to offer better products and services to their customers. This research proposes a new method for identifying the strengths and weaknesses of products or services using review sites and natural language processing software. In this research, we used an online review site, Skytrax, to collect user reviews on the economy class flights of four airlines. We then analyzed the data collected from the reviews to identify the strengths and weaknesses of each airline. The results of the analysis can help identify and reconcile discrepancies between customer expectations and perceptions of products or services. [ABSTRACT FROM AUTHOR]
- Published
- 2012
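The kind of review analysis described in the abstract above can be sketched in miniature as counting how often aspect terms co-occur with positive or negative words in each review. The word lists and sample reviews here are hypothetical; the study itself used Skytrax reviews and natural language processing software rather than simple keyword matching:

```python
POSITIVE = {"good", "great", "comfortable", "friendly"}
NEGATIVE = {"bad", "poor", "cramped", "rude"}
ASPECTS = {"seat", "food", "crew", "service"}

def aspect_sentiment(reviews):
    """Tally positive/negative word co-occurrence per aspect term, per review."""
    scores = {a: {"pos": 0, "neg": 0} for a in ASPECTS}
    for review in reviews:
        words = set(review.lower().split())
        for aspect in ASPECTS & words:   # aspects mentioned in this review
            scores[aspect]["pos"] += len(POSITIVE & words)
            scores[aspect]["neg"] += len(NEGATIVE & words)
    return scores

reviews = ["The seat was cramped but the crew was friendly",
           "Great food and friendly service"]
result = aspect_sentiment(reviews)
print(result["crew"])
```

Aggregating such tallies across airlines would surface relative strengths and weaknesses, which is the comparison the paper draws across its four carriers.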
167. A Hybrid Approach to Decision Support Environment: Onto-DM-DSS Model
- Author
-
Mishra, Aastha, Yadav, Amit, Singh, Preetvanti, Lim, Meng-Hiot, Series Editor, Ong, Yew Soon, Series Editor, Pandit, Manjaree, editor, Srivastava, Laxmi, editor, Venkata Rao, Ravipudi, editor, and Bansal, Jagdish Chand, editor
- Published
- 2020
- Full Text
- View/download PDF
168. A review of data mining in knowledge management: applications/findings for transportation of small and medium enterprises
- Author
-
Mohd Selamat, Siti Aishah, Prakoonwit, Simant, and Khan, Wajid
- Published
- 2020
- Full Text
- View/download PDF
169. The study on influencing factors of Regional Incubators based on knowledge management.
- Author
-
Wang, Xiao-Feng and Zhou, Pang
- Abstract
This paper presents a study of the regional effect on the incubator function of an innovation cluster from the perspective of knowledge management. First, the concept of the Regional Incubator Function is defined, and the reasons for the birth and development of the Regional Incubator, together with its effect in promoting knowledge spillover, are analyzed. Second, factors influencing knowledge spillover among corporations within the region are analyzed, and a model of knowledge spillover is built using data mining techniques. Finally, recommendations for policy control in the process of realizing the Regional Incubator are given. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
170. Integration of data quality component in an ontology based knowledge management approach for e-learning system.
- Author
-
Sangodiah, Anbuselvan and Lim Ean Heng
- Abstract
At present, with the growing demand of the knowledge economy and the fast popularization and development of the World Wide Web, many people are turning to e-learning via the web. There are plenty of e-learning systems nowadays, and these systems provide some simple tools to manage and search large collections of teaching materials. In contrast to traditional teaching-materials bases, which neglect to integrate semantic and conceptual teaching knowledge, the integration of ontology technology and knowledge management into e-learning has enabled information to be captured and reused effectively. Users can now easily and conveniently organize, share, and reuse knowledge during e-learning. Although a great deal of research has revolved around ontology- and knowledge-management-based technology in e-learning, the contextual data quality issues inherent in online e-learning forums have yet to be fully explored, and addressing these issues is important to ensure that only quality data is stored consistently in the knowledge base of an e-learning environment. This paper highlights contextual data quality issues, particularly in online e-learning forums, and proposes a data quality component for the e-learning environment, in the context of the knowledge base, to address these issues. Also discussed are the technologies, such as text mining, data mining, and AI, that are to be incorporated into the component to accomplish its tasks. With the component in place, the contextual data quality issues can be resolved to a certain extent. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
171. A Review of Failure Handling Mechanisms for Data Quality Measures.
- Author
-
Emran, Nurul A., Abdullah, Noraswaliza, and Mustafa, Nuzaimah
- Subjects
DATA quality ,DATA mining ,CONTENT mining ,KNOWLEDGE management ,INFORMATION resources management - Abstract
A successful data quality (DQ) measure is important for many data consumers (or data guardians) in deciding on the acceptability of the data concerned. Nevertheless, little is known about how "failures" of DQ measures can be handled by data guardians in the presence of the factor(s) that contribute to the failures. This paper presents a review of failure handling mechanisms for DQ measures. The failure factors faced by existing DQ measures are presented, together with the research gaps with respect to failure handling mechanisms in DQ frameworks. We propose ways to maximise the situations in which data quality scores can be produced when factors that would cause the failure of currently proposed scoring mechanisms are present. By understanding how failures can be handled, a systematic failure handling mechanism for robust DQ measures can be designed. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
172. Making Organizational Learning Work: Lessons From a High Reliability Organization.
- Author
-
Sullivan, John and Beach, Roger
- Subjects
ORGANIZATIONAL learning ,DATA mining ,KNOWLEDGE workers - Abstract
This paper reports findings from an ongoing study to understand the dynamics of operational reliability. Previously, the study identified weaknesses in organizational settings that inhibited learning opportunities, specifically the ability to learn from failure (Sullivan et al., 2008). Effective organizational learning strategies are critical in promoting operational reliability, particularly in recovering from operational failures or preventing them altogether (Sullivan, 2007). In the literature, there is considerable debate over the effectiveness of organizational learning; however, there is evidence showing that it can, and in some cases must, work. The U.S. Navy demonstrates exceptional learning capabilities, learning from failure and even learning without failure. Further, the Navy's knowledge management practices have proven effective over time as generations of military personnel, civil servants, and contractors learn from the experiences of their predecessors (Sullivan, 2007). Findings from a six-month ethnographic study of an organization working in a high reliability environment supporting Navy operations are presented in this paper. Data gathered from this study concerning an operational failure are analyzed using the Sullivan-Beach Model (Sullivan and Beach, 2009), an explanatory framework that describes the dynamics of High Reliability Organizations (HROs), the factors that contribute to operational reliability, and those that threaten to undermine the reliability of an organization's operations. From this analysis, the possible causes of operational failure were identified, leading to remedial actions being taken and dramatic improvements in operational reliability being achieved. This work also yielded valuable insights into why some organizations learn from failure when others do not. [ABSTRACT FROM AUTHOR]
- Published
- 2011
173. Evaluation of Application Embedded Knowledge Migration Issues.
- Author
-
Cochran, Mitchell
- Subjects
KNOWLEDGE management ,INTELLECTUAL property ,COMPUTER software ,DATA mining ,END users (Information technology) ,INFORMATION technology ,VENDORS (Real property) - Abstract
As computing has matured, more organizations are purchasing best-of-breed applications rather than developing them in-house. From a knowledge management point of view, these organizations are renting the use of knowledge that is embedded in the applications: the organization may own the data, but the host application company owns the intellectual capital that creates the knowledge. For any of a number of reasons, organizations will have to move to new applications and, in turn, new knowledge. It is assumed that the organization will be able to migrate current data and print reports, but it does not own the original base knowledge. The issue is to understand what knowledge is embedded in the old application and how it can be integrated into the new system. As the knowledge is inventoried, the new vendor can determine whether that knowledge will be available in the new system. After that determination, the user might have to decide whether the information is obsolete or possibly lost data. The migration issue can also put the onus of development on the end user. Consider the conversation between the developer and the end user, where the end user asks for a feature the developer has not seen: the end user is looking for features from the old system, and the developer will say that it is up to the end user to specify what they want. The problem is that the end user may not know what they want, because the knowledge embedded in the code of the old application provided that information. That information is the intellectual property of the outgoing vendor, which may have no reason to work with the incoming vendor. The paper evaluates migration issues based on a case study of the migration of a financial application for a small city. The paper also discusses some of the assumptions of knowledge management and a knowledge inventory to help an organization prepare to move applications to a new vendor. [ABSTRACT FROM AUTHOR]
- Published
- 2011
174. Learning From Experience: Can e-Learning Technology be Used as a Vehicle?
- Author
-
Thien Wan Au, Shazia Sadiq, and Xue Li
- Subjects
EXPERIENTIAL learning ,ONLINE education ,INTERNET in education ,DATA mining ,KNOWLEDGE management ,EDUCATIONAL innovations - Abstract
In the academic, corporate and consumer fields, the adoption of eLearning is on the rise. Despite its popularity and the huge investment in it, eLearning is still regarded as not quite living up to expectations, and various studies have raised major concerns about its effectiveness and appropriateness. Many of the eLearning systems developed today merely automate the process and management of teaching and course delivery, with the advantage of eliminating time and space barriers. Their value for better learning outcomes is still an area of study, although some researchers have recognized the issues and provided innovative solutions to related problems. In eLearning, especially in the absence of face-to-face contact with educators, lecturers, facilitators and tutors, capturing learners' experiences and making them available or sharable to peers as knowledge would be a critical catalyst for making learning more efficient and producing better outcomes. Sharing experience within organizations has been a research topic since the 1990s and has been used extensively in practice; many organizations have benefited epistemologically and financially from sharing experience and knowledge. Sharing of learning experience (LE) in academic eLearning, by contrast, is less common, but it has recently been catching attention and is seen as an important asset in eLearning. Evaluations of eLearning with respect to knowledge sharing indicate improved learning effectiveness. Accordingly, an eLearning system comprises three essential components: human, knowledge and technology (HKT). The learning process is, by nature, a transfer process between tacit and explicit knowledge. The purpose of the paper is to redefine LE in the context of eLearning within the HKT paradigm and to propose a structure for LE and a conceptual architecture of an LE Recommender System (LERS).
The LERS architecture utilizes learners' profiles, outcomes and behavior to capture and store learners' experience. Essentially, LE is conceptualized as events resulting from interactions, satisfying personalized needs and promoting the use and reuse of shared personal or common knowledge. LE reuse in the form of knowledge implies the transformation of knowledge into action, typically represented as the ability to solve problems and accelerate learning. Data mining techniques form part of the LERS, helping to optimize peer learning by recommending appropriate LEs to learners based on their behavior and profiles. The originality of the concept is the use of data mining to recommend LEs dynamically to targeted learners, based on user profiles and user behavior, in order to optimize the learning process, improve effectiveness and produce better outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2009
175. Using Text Mining Tools for Event Data Analysis.
- Author
-
Kacprzyk, Janusz, Sirmakessis, Spiros, and Stathopoulou, Theoni
- Subjects
DATA analysis ,DATA mining ,DATABASE searching ,DECISION support systems ,KNOWLEDGE management - Abstract
This paper concerns itself with the analysis of event data with text mining tools. The methodological approaches to event data analysis are presented, and an analysis is performed using SPAD Software and SAS Text Miner. Finally, some conclusions are drawn concerning the use of text mining tools for event data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
176. Dealing with Uncertainty in the Real-Time Knowledge Discovery Process.
- Author
-
Oosterom, Peter, Zlatanova, Siyka, Fendel, Elfriede M., Wachowicz, Monica, and Hunter, Gary J.
- Subjects
KNOWLEDGE management ,SEARCH engines ,DATABASE searching ,DECISION support systems ,DATA mining - Abstract
This paper examines where uncertainty may lie in the knowledge discovery process through case studies in disaster management, leading to a discussion of what future action is required to address the uncertainty that may lie within knowledge obtained through these techniques. We describe our approach to three types of issues: accuracy, efficiency, and usability. Typically, data mining techniques have higher false positive rates than traditional data exploratory methods, making them unusable in real-time systems. These techniques also tend to be inefficient (that is, computationally expensive) during the steps of a knowledge discovery process, particularly during training and evaluation, which prevents them from processing data and detecting anomalies, hot spots, or patterns in real-time applications. Finally, disaster management applications require large amounts of training data and are significantly more complex than traditional GIS applications. These problems are inherent in developing and deploying any real-time data-mining-based system, and although there are trade-offs between these three groups of issues, each can generally be handled separately. The paper concludes by presenting the key design elements for supporting a real-time knowledge discovery process and grouping them according to the general issues they address. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
177. 2015 WHITE PAPER CALENDAR BEST PRACTICES IN ...
- Subjects
KNOWLEDGE management ,BUSINESS process management ,DATA mining - Abstract
A calendar of events related to knowledge management in 2015 is presented including one about business process management (BPM) in January, another on data mining in March, and one regarding web content management in April.
- Published
- 2015
178. Categorization for grouping associative items using data mining in item-based collaborative filtering.
- Author
-
Chung, Kyung-Yong, Lee, Daesung, and Kim, Kuinam
- Subjects
DATA mining ,DECISION support systems ,DATABASE searching ,KNOWLEDGE management ,SEARCH engines - Abstract
Recommendation systems have been investigated and implemented in many ways. For a collaborative filtering system in particular, the most important issue is how to shape the personalized recommendation results for better user understandability and satisfaction. A collaborative filtering system predicts items of interest for users based on predictive relationships discovered between each item and others. This paper proposes a categorization for grouping associative items discovered by mining, with the purpose of improving the accuracy and performance of item-based collaborative filtering. If an associative item is required to be simultaneously associated with all other groups in which it occurs, the proposed method can collect associative items into relevant groups. The proposed method can also improve predictive performance under sparse data and during the cold-start initiation of collaborative filtering, which begins with a small number of items. Furthermore, it can increase prediction accuracy and scalability because it removes the noise generated by ratings on items of dissimilar content or level of interest. The approach is empirically evaluated by comparison with k-means, average link, and robust, using the MovieLens dataset. The method was found to outperform existing methods significantly. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
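The item-based collaborative filtering that entry 178 builds on can be sketched in a few lines. This is a minimal baseline (cosine item-item similarity, weighted-average prediction) with made-up ratings; it is not the paper's categorization method, which additionally groups associative items before prediction.

```python
import math

# Toy user-item rating matrix (hypothetical data); missing = unrated.
ratings = {
    "u1": {"A": 5, "B": 4},
    "u2": {"A": 4, "B": 5, "C": 2},
    "u3": {"A": 1, "B": 2, "C": 5},
}

def item_vector(item):
    # Ratings for `item` across all users (0 when unrated).
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, item, items):
    # Weighted average of the user's ratings on other items,
    # weighted by item-item similarity (plain item-based CF).
    num = den = 0.0
    for other in items:
        if other == item or other not in ratings[user]:
            continue
        s = cosine(item_vector(item), item_vector(other))
        num += s * ratings[user][other]
        den += abs(s)
    return num / den if den else 0.0

p = predict("u1", "C", ["A", "B", "C"])
```

Grouping associative items, as the paper proposes, would restrict the `items` argument to the group relevant to the target item, which is what removes the noise from dissimilar items.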
179. Data Mining on Romanian Stock Market Using Neural Networks for Price Prediction.
- Author
-
NEMES, Magdalena Daniela and BUTOI, Alexandru
- Subjects
DATA mining ,DATABASE marketing ,KNOWLEDGE management ,STOCK exchanges ,ARTIFICIAL neural networks ,PRICE regulation - Abstract
Predicting future prices by using time series forecasting models has become a relevant trading strategy for most stock market players. Intuition and speculation are no longer reliable as many new trading strategies based on artificial intelligence emerge. Data mining represents a good source of information, as it ensures data processing in a convenient manner. Neural networks are considered useful prediction models when designing forecasting strategies. In this paper we present a series of neural networks designed for stock exchange rate forecasting, applied to three Romanian stocks traded on the Bucharest Stock Exchange (BSE). A multistep-ahead strategy was used to predict short-term price fluctuations. The findings of our study can later be integrated with an intelligent multi-agent system model that uses data mining and data stream processing techniques to help users in the decision-making process of buying or selling stocks. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
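The multistep-ahead strategy named in entry 179 means feeding each prediction back in as the next input. A minimal sketch with a simple AR(1) model fitted by least squares stands in here for the paper's neural networks; the price series is invented.

```python
# Hypothetical daily closing prices (trending upward).
prices = [10.0, 10.5, 10.2, 10.8, 11.0, 11.4, 11.3, 11.9]

# Fit x[t+1] ~ a * x[t] + b by ordinary least squares.
xs, ys = prices[:-1], prices[1:]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def forecast(last, steps):
    # Multistep-ahead: each prediction becomes the next model input,
    # so errors compound with the horizon length.
    out, x = [], last
    for _ in range(steps):
        x = a * x + b
        out.append(x)
    return out

horizon = forecast(prices[-1], 3)  # 3-step-ahead forecast
```

A trained neural network would simply replace the linear map `a * x + b`; the recursive feeding-back of predictions is identical.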
180. GUEST EDITOR'S INTRODUCTION.
- Author
-
Chen, Yi-Ping Phoebe
- Subjects
DATA mining ,COMPUTERS in biology ,BIOINFORMATICS ,DATABASE searching ,INFORMATION storage & retrieval systems ,KNOWLEDGE management - Abstract
No abstract received. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
181. Semantic invoice processing.
- Author
-
Escobar-Vega, Luis M., Zaldívar-Carrillo, Víctor H., Villalon-Turrubiates, Ivan, Pinto, Singh, Villavicencio, Mayr-Schlegel, and Stamatatos
- Subjects
SEMANTIC networks (Information theory) ,DATA mining ,ONTOLOGIES (Information retrieval) ,KNOWLEDGE management ,SEMANTIC Web - Abstract
This work highlights how to transform information from invoice documents into semantic models, as an implementation of ontology modeling. The migration from printed paper to digital documents in Mexican Government Offices in the last few years has brought significant opportunities for the usage of information technologies and applications. However, when turning digital document information into knowledge, there are still many gaps to be filled. This work proposes a solution to some issues regarding ontology modeling, specifically when mapping a document that follows some XML schema to an ontology under the OWL standard. The main contribution of this work is to provide new interpretations of XML terms in the context of OWL, so that XML Schema Definition (XSD) structures can be mapped into more complex OWL structures. A software tool developed to test and validate the proposed information extraction strategies is presented here. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
182. A fuzzy association rule-based knowledge management system for occupational safety and health programs in cold storage facilities.
- Author
-
Tsang, Y. P., Choy, K. L., Koo, P. S., Ho, G. T. S., Wu, C. H., Lam, H. Y., and Tang, Valerie
- Subjects
KNOWLEDGE management ,INDUSTRIAL safety management ,COLD storage ,STORAGE facilities ,DATA mining ,FUZZY sets ,MANAGEMENT - Abstract
Purpose – This paper aims to improve operational efficiency and minimize accident frequency in cold storage facilities through the adoption of an effective occupational safety and health program. Hidden knowledge can be extracted from warehousing operations to create a comfortable and safe workplace environment. Design/methodology/approach – A fuzzy association rule-based knowledge management system is developed by integrating fuzzy association rule mining (FARM) and a rule-based expert system (RES). FARM is used to extract hidden knowledge from real operations and establish the relationship between safety measurement, personal constitution and key performance index measurement. The extracted knowledge is then stored and adopted in the RES to establish an effective occupational safety and health program. A case study is conducted to validate the performance of the proposed system. Findings – The results indicate that the aforementioned relationship can be expressed in the form of IF-THEN rules. An appropriate safety and health program can be developed and applied to all workers, so that they can follow instructions to prevent cold-induced injuries and also improve productivity. Practical implications – Given the increasing public consciousness of occupational safety and health, it is important to protect workers in cold storage facilities, where the ambient temperature is at or below 10°C. The proposed system addresses this social problem and promotes the importance of occupational safety and health in society. Originality/value – This study contributes a knowledge management system for improving occupational safety and operational efficiency in cold storage facilities. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
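The FARM step in entry 182 fuzzifies raw measurements and scores IF-THEN rules by fuzzy support and confidence. The sketch below illustrates only that idea; the membership functions, thresholds, and records are all hypothetical, not the paper's rule base.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_temp(t):
    # Made-up linguistic terms for ambient temperature (deg C).
    return {"cold": tri(t, -40, -20, 5), "cool": tri(t, 0, 8, 15)}

# (exposure_minutes, ambient_temp, fatigue_degree) -- invented records.
records = [(50, -18, 0.5), (45, -20, 0.8), (10, 6, 0.2), (15, 4, 0.3)]

def fuzzy_support(degree_of):
    # Fuzzy support: average membership degree over all records.
    return sum(degree_of(r) for r in records) / len(records)

# Rule: IF ambient IS cold AND exposure IS long THEN fatigue IS high.
def antecedent(r):
    minutes, temp, _ = r
    long_exposure = min(minutes / 60.0, 1.0)  # crude membership
    return min(fuzzify_temp(temp)["cold"], long_exposure)  # fuzzy AND

def whole_rule(r):
    return min(antecedent(r), r[2])  # AND with the consequent degree

support = fuzzy_support(whole_rule)
confidence = support / fuzzy_support(antecedent)
```

Rules whose fuzzy support and confidence clear chosen thresholds would then be stored in the expert system as IF-THEN safety guidance, matching the workflow the abstract describes.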
183. INFORMATION SYSTEMS BACKSOURCING: A LITERATURE REVIEW.
- Author
-
VON BARY, BENEDIKT and WESTNER, MARKUS
- Subjects
KNOWLEDGE management ,INFORMATION services ,INFORMATION resources management ,INFORMATION technology ,DATA mining - Abstract
Information systems backsourcing describes the transfer of previously outsourced activities, assets, or personnel back to the originating company to regain ownership and control. While there is much research on information systems outsourcing, the topic of backsourcing information systems is still an emerging research area. Our paper therefore aims to explore and synthesize the existing literature on information systems backsourcing, since, to our knowledge, no exhaustive literature review of the state of the research is available yet. In this paper, we create a framework to structure the existing research along the overall backsourcing process. We identify different motivators, such as expectation gaps or internal and external organizational changes, leading towards a backsourcing decision, and factors positively or negatively influencing this decision. Additionally, we derive implementation success factors based on the existing literature to guide companies through the backsourcing process. We also differentiate the term backsourcing from related, sometimes synonymously used terms, by emphasizing the change of ownership back to the company of origin as the main criterion. Finally, we discuss opportunities for future research in the field of information systems backsourcing. [ABSTRACT FROM AUTHOR]
- Published
- 2018
184. A binary PSO approach to mine high-utility itemsets.
- Author
-
Lin, Jerry, Yang, Lu, Fournier-Viger, Philippe, Hong, Tzung-Pei, and Voznak, Miroslav
- Subjects
DATA mining ,KNOWLEDGE management ,PARTICLE swarm optimization ,GENETIC algorithms ,MEDIA mining systems ,EVOLUTIONARY computation - Abstract
High-utility itemset mining (HUIM) has become a critical topic in recent years because, unlike frequent itemset mining (FIM) or association-rule mining (ARM), it can reveal profitable products by considering both quantity and profit factors. Several algorithms have been presented to mine high-utility itemsets (HUIs), and most of them must handle an exponential search space when the number of distinct items and the size of the database are very large. In the past, the heuristic HUPE_umu-GRAM algorithm was proposed to mine HUIs based on a genetic algorithm (GA). Among evolutionary computation (EC) techniques, particle swarm optimization (PSO) requires fewer parameters than GA-based approaches. Since the traditional PSO mechanism handles continuous problems, this paper adopts discrete PSO, encoding the particles as binary variables. An efficient PSO-based algorithm, HUIM-BPSO, is proposed to find HUIs efficiently. The designed HUIM-BPSO algorithm takes the high-transaction-weighted utilization 1-itemsets (1-HTWUIs), found via the transaction-weighted utility (TWU) model, as the particle size, which greatly reduces the combinatorial problem in the evolution process. A sigmoid function is adopted in the particle-updating process. An OR/NOR-tree structure is further developed to prune invalid combinations when discovering HUIs. Substantial experiments on real-life datasets show that the proposed algorithm outperforms other heuristic algorithms for mining HUIs in terms of execution time, number of discovered HUIs, and convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
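The sigmoid-based binary particle update that entry 184 relies on can be sketched as follows. This is the generic binary-PSO step (real-valued velocity update, sigmoid mapped to a bit probability) with invented parameters; the paper's OR/NOR-tree pruning and utility evaluation are omitted.

```python
import math
import random

random.seed(7)  # reproducible toy run

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def update_particle(bits, velocity, pbest, gbest, w=0.7, c1=1.4, c2=1.4):
    # Binary-PSO step: update each real-valued velocity from the
    # personal best (pbest) and global best (gbest), then sample the
    # new bit with probability sigmoid(velocity).
    new_v, new_bits = [], []
    for i, b in enumerate(bits):
        r1, r2 = random.random(), random.random()
        v = (w * velocity[i]
             + c1 * r1 * (pbest[i] - b)
             + c2 * r2 * (gbest[i] - b))
        new_v.append(v)
        new_bits.append(1 if random.random() < sigmoid(v) else 0)
    return new_bits, new_v

# Each bit decides whether one of the 1-HTWUIs joins the candidate
# itemset, so particle length equals the number of 1-HTWUIs.
bits, vel = [0, 1, 0, 1, 0], [0.0] * 5
pbest, gbest = [1, 1, 0, 0, 0], [1, 1, 1, 0, 0]
bits, vel = update_particle(bits, vel, pbest, gbest)
```

In the full algorithm each sampled bit vector is decoded into an itemset, its utility is computed against the database, and pbest/gbest are updated before the next iteration.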
185. Two-stage credit rating prediction using machine learning techniques.
- Author
-
Wu, Hsu-Che, Hu, Ya-Han, and Huang, Yen-Hao
- Subjects
CREDIT ratings ,SUPERVISED learning ,FINANCIAL institutions ,CREDIT risk ,DATA mining ,FEATURE selection ,PREDICTION models - Abstract
Purpose – Credit ratings have become one of the primary references for financial institutions to assess credit risk. Conventional credit rating approaches mainly concentrated on two-class classification (i.e. good or bad credit), which lacks adequate precision for credit risk evaluation in practice. In addition, most previous research directly focused on employing various data mining techniques, and few studies discussed the influence of data preprocessing before classifier construction. The paper aims to discuss these issues. Design/methodology/approach – This study applies nine-class classification (i.e. nine credit risk levels) to credit rating prediction. To develop more accurate classifiers, the paper adopts a two-stage analysis that integrates multiple data preprocessing and supervised learning techniques. Specifically, the first stage applies feature selection, data clustering, and data resampling methods to preprocess the data, and the second stage utilizes several classification techniques and classifier ensembles to construct prediction models. Findings – The results show that Bagging-DT with the data resampling method achieves excellent accuracy (82.96 percent), indicating that the proposed two-stage prediction model is better than conventional one-stage models. Originality/value – In practice, this study can lower credit rating expenses and allow corporations to obtain credit rating information instantly. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
186. Data Mining and Knowledge Management Application to Enhance Business Operations: An Exploratory Study
- Author
-
Mahmood, Zeba, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Arai, Kohei, editor, Kapoor, Supriya, editor, and Bhatia, Rahul, editor
- Published
- 2019
- Full Text
- View/download PDF
187. Advances in Interdisciplinary Research in Engineering and Business Management
- Author
-
P. K. Kapur, Gurinder Singh, Saurabh Panwar, P. K. Kapur, Gurinder Singh, and Saurabh Panwar
- Subjects
- Information technology, Knowledge management, Data mining, Computer software--Reliability
- Abstract
The volume contains the latest research on software reliability assessment, testing, quality management, inventory management, mathematical modeling, analysis using soft computing techniques, and management analytics. It links researcher and practitioner perspectives from different branches of engineering and management, and from around the world, for a bird's-eye view of the topics. The interdisciplinarity of engineering and management research is widely recognized and considered especially significant in the fast-changing dynamics of today's times. With insights from the volume, companies looking to drive decision making gain actionable insight at each level and for every role, using key indicators to generate mobile-enabled scorecards, time-series-based analysis using charts, and dashboards. At the same time, the book provides scholars with a platform to derive maximum utility in the area by subscribing to the idea of managing business through performance and business analytics.
- Published
- 2021
188. KNOWLEDGE DISCOVERY FROM AN ERP DATABASE IN THE CONTEXT OF NEW PRODUCT DEVELOPMENT.
- Author
-
Relich, Marcin
- Subjects
KNOWLEDGE management ,DATA mining ,PROJECT management ,ENTERPRISE resource planning ,INDUSTRIAL management - Abstract
Copyright of Business Informatics / Informatyka Ekonomiczna is the property of Uniwersytet Ekonomiczny we Wroclawiu and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2013
189. Classification of human resources based on measurement of tacit knowledge: An empirical study in Iran.
- Author
-
Jafari, Mostafa, Akhavan, Peyman, and Nourizadeh, Mozhden
- Subjects
PERSONNEL management ,KNOWLEDGE management research ,TACIT knowledge ,REPERTORY grid technique ,DATA mining ,AUTOMOBILE industry - Abstract
Purpose – The purpose of this paper is to investigate the employees of an organization in order to evaluate and classify them based on their tacit knowledge. Staff tacit knowledge is therefore measured at the individual level, in the automotive sector. Design/methodology/approach – The repertory grid technique has been used as a mechanism to aid the elicitation and evaluation of individuals' personal constructs. Pathfinder analysis was then carried out to examine each individual's knowledge structure. The similarity between the knowledge structures of expert and novice was measured by the set-theoretic index C in order to classify staff, and Idiogrid software was used to analyze each group. Findings – Based on the closeness index, all employees were classified into four categories. Respondents' perceptions were evaluated by comparing the mental model graphs of individuals in different categories with the experts' graphs. Ultimately, the most effective HR practices were determined for managing knowledge workers. Research limitations/implications – This study may help the HRM department differentiate between low- and high-performing employees when determining the most effective HR practices for managing staff. It also has significant implications for team building and for designing a knowledge map within organizations. Originality/value – This paper reveals effective linkages between human resource management and knowledge management. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
190. A KNOWLEDGE DOCUMENT STRUCTURED SUMMARIZATION MODEL.
- Author
-
Shih-Ting Yang and Yu-Ting Gong
- Subjects
SEARCH engines ,ENTERPRISE resource planning ,BUSINESS planning ,BUSINESS intelligence ,INDUSTRIAL safety ,INDUSTRIAL hygiene - Abstract
It is common practice to acquire information and knowledge from the Internet; thus, keyword searching, document classification and other technologies have been developed to facilitate document searching. Although search engines can narrow down the scope of a search, knowledge demanders without domain knowledge in a specific field must search continuously and sift through feedback. Hence, this paper develops a Knowledge Document Structured Summarization model to analyze the ergonomic technology reports from the website of the Institute of Occupational Safety and Health. First, the expressions and domain vocabulary of knowledge documents are captured to build the domain vocabulary database via the Knowledge Document Analysis (KDA) module. Secondly, through the Conceptual Sentence Acquisition (CSA) module, the conceptual or representative sentences of domain documents are derived and serve as candidate sentences for structured summarization. Finally, the Document Structured Summarization (DSS) module calculates and retrieves representative sentences of the documents and integrates them into a document abstract for knowledge demanders. Through this model, knowledge demanders can directly read the parts relevant to their problems, ensuring they can find the documents they want within a short time. In addition, a web-based system is developed based on the proposed model. Finally, the improvement reports (knowledge documents) collected from the Institute of Occupational Safety and Health are used for verification, and the kernel modules of the system are applied to demonstrate the feasibility of the proposed methodology and the developed system. [ABSTRACT FROM AUTHOR]
- Published
- 2013
191. Rolling element bearing fault recognition approach based on fuzzy clustering bispectrum estimation.
- Author
-
Liu, W.Y. and Han, J.G.
- Subjects
MATHEMATICAL optimization ,FUZZY algorithms ,DATABASE searching ,DATA mining ,ONLINE data processing ,KNOWLEDGE management - Abstract
A rolling element bearing fault recognition approach is proposed in this paper. The method combines basic higher-order spectrum (HOS) theory with a fuzzy clustering method from the data mining area. In the first step, all the bispectrum estimation results of the training and test samples are turned into binary feature images. Secondly, the binary feature images of the training samples are used to construct object templates, including kernel images and domain images; every fault category has one object template. Finally, by calculating the distances between the test samples' binary feature images and the different object templates, object classification and pattern recognition can be effectively accomplished. Bearings are the most important, and the most easily damaged, components in rotating machinery, and bearing vibration signals contain large amounts of noise jamming and nonlinear coupling components. Higher-order cumulants (HOC), which can quantitatively describe the nonlinear characteristic signals closely related to mechanical faults, are introduced in this paper to de-noise the raw bearing vibration signals and obtain the bispectrum estimation pictures. In the experimental part, the rolling bearing fault diagnosis results show that the classification was completely correct. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
192. Quantitative Bio-Medical Data Analysis and Visualization Using Data Mining and Text Mining Approaches.
- Author
-
Krishnaiah, V. V. Jaya Rama, Rao, K. Ramchand H., and Rao, K. Mrithyunjaya
- Subjects
DATA mining ,DATABASE searching ,DATABASES ,DATA analysis ,MEDICAL research ,KNOWLEDGE representation (Information theory) - Abstract
In view of today's information avalanche, recent progress in data mining research has led to the development of numerous efficient and scalable methods for mining interesting patterns in large databases. The focus on data analysis and data mining tools in biomedical research highlights the current state of research in key areas such as medical informatics, public health informatics and biomedical imaging. Medicine and the biomedical sciences have become data-intensive fields, which both enable data-driven approaches and require sophisticated data analysis and data mining methods. Biomedical informatics provides a proper interdisciplinary context to integrate data and knowledge when processing available information, with the aim of giving effective decision-making support in clinics and translational research. Biomedical text data mining is concerned with automated methods for analyzing the content of documents and discovering and extracting the knowledge in them. Numerical data mining has long been used to uncover patterns in numerical data and make predictions based on those patterns; text data mining builds on this success but presents additional challenges. The amount of available biomedical data continues to grow at an exponential rate; however, the impact of utilizing such resources remains minimal. The development of innovative tools to integrate, analyze and mine such data sources is a key step towards achieving larger impact. In this paper, we analyze how data mining may help biomedical data analysis and outline some research problems that may motivate the further development of data mining tools for bio-data analysis and knowledge representation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
193. A self-organization mining based hybrid evolution learning for TSK-type fuzzy model design.
- Author
-
Lin, Sheng-Fuu, Chang, Jyun-Wei, and Hsu, Yung-Chi
- Subjects
FUZZY systems ,DATA mining ,DECISION support systems ,KNOWLEDGE management ,GENETIC algorithms - Abstract
In this paper, a self-organization mining based hybrid evolution (SOME) learning algorithm for designing a TSK-type fuzzy model (TFM) is proposed. The proposed SOME adopts group-based symbiotic evolution (GSE), in which each group represents a collection of only one fuzzy rule. SOME consists of structure learning and parameter learning. In structure learning, it uses a two-step self-organization algorithm to decide the suitable number of rules in a TFM. In parameter learning, it uses a data-mining-based selection strategy and a data-mining-based crossover strategy to decide groups and parental groups via the data mining algorithm called frequent pattern growth. Illustrative examples were conducted to verify the performance and applicability of the proposed SOME method. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
194. Talkoot: software tool to create collaboratories for earth science.
- Author
-
Ramachandran, Rahul, Maskey, Manil, Kulkarni, Ajinkya, Conover, Helen, Nair, U., and Movva, Sunil
- Subjects
WORKFLOW ,DATA mining ,KNOWLEDGE management ,ORGANIZATION ,EARTH sciences - Abstract
'Open science,' where researchers share and publish every element of their research process in addition to the final results, can foster novel ways of collaboration among researchers and has the potential to spontaneously create new virtual research collaborations. Based on scientific interest, these new virtual research collaborations can cut across traditional boundaries such as institutions and organizations. Advances in technology allow for software tools that can be used by different research groups and institutions to build and support virtual collaborations and infuse open science. This paper describes Talkoot, a software toolkit designed and developed by the authors to provide Earth Science researchers with a ready-to-use knowledge management environment and an online platform for collaboration. Talkoot gives Earth Science researchers a means to systematically gather, tag and share their data, analysis workflows and research notes. These features are designed to foster rapid knowledge sharing within a virtual community. Talkoot can be utilized by small- to medium-sized groups and research centers, as well as large enterprises such as national laboratories and federal agencies. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
195. Intelligent Agent-Based Knowledge Management and Knowledge Discovery.
- Author
-
Devadas, T. Joshva and Ganesan, R.
- Subjects
KNOWLEDGE management ,INTELLIGENT agents ,DATA mining ,COMPUTER software ,ARTIFICIAL intelligence - Abstract
Knowledge management is the process of collecting information that supports creating, disseminating and utilizing knowledge among individuals and groups within an organization, or across independent organizations. The process of knowledge management involves various steps, such as identifying, collecting, storing, sharing, applying, creating and selling knowledge. Agents are autonomous, intelligent computer programs that perform tasks on behalf of a user or a user-initiated process by using their knowledge base. Agents are designed and developed to be goal-oriented, adaptive, reactive and mobile. Agents incorporated in the knowledge management process should be capable of communicating with other agents through their common characteristics, namely cooperation, coordination and collaboration. These common characteristics improve the performance of the knowledge management process by helping the agents discover knowledge from it. An agent uses its learning characteristics to update its knowledge base whenever it encounters new information from the organization, and its dynamic characteristics for knowledge sharing among users in an organization. Knowledge sharing is done both at the work group and the company level. During this process, the agent roles associated with knowledge management and knowledge discovery are identified. This paper aims at describing such agents and their roles. [ABSTRACT FROM AUTHOR]
- Published
- 2012
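The cooperating, learning and sharing behaviours the abstract attributes to agents can be sketched as a toy model. This is an illustrative assumption, not the paper's implementation: the `Agent` class and its `learn`/`share` methods are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal autonomous agent with a private knowledge base."""
    name: str
    knowledge: dict = field(default_factory=dict)
    peers: list = field(default_factory=list)

    def learn(self, key, value):
        # Learning characteristic: update the knowledge base on new information.
        self.knowledge[key] = value

    def share(self, key):
        # Dynamic characteristic: propagate a knowledge item to cooperating peers.
        for peer in self.peers:
            peer.learn(key, self.knowledge[key])

# Two cooperating agents in a work group
a, b = Agent("curator"), Agent("analyst")
a.peers.append(b)
a.learn("policy.retention", "7 years")
a.share("policy.retention")
print(b.knowledge["policy.retention"])  # the shared item reaches the peer
```

In a fuller design the `share` step would go through coordination protocols rather than direct peer calls, but the update-then-propagate cycle is the core of agent-mediated knowledge sharing.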
196. Mining sequential patterns with extensible knowledge representation.
- Author
-
Gao, Shang, Alhajj, Reda, Rokne, Jon, and Guan, Jiwen
- Subjects
DATA analysis ,DATA mining ,KNOWLEDGE representation (Information theory) ,KNOWLEDGE management ,INFORMATION theory - Abstract
Mining sequential patterns is an important activity in computerized data analysis, and further analysis of discovered patterns can lead to more important findings in a post-mining stage. Methods that integrate the traditional mining tasks with a knowledge representation that facilitates operations such as post-pruning are therefore needed. This paper describes a set-based approach to mining frequent sequential patterns in customer transactional databases that includes an extensible knowledge representation. This knowledge representation is a byproduct of the set-based approach and facilitates post-mining analysis. The proposed approach employs a set-based knowledge representation and improves the performance of Apriori-based algorithms while preserving the complete set of sequential patterns. It takes advantage of an incremental mining methodology and provides a rich knowledge representation. The performance of the proposed approach is compared to that of existing sequential pattern mining algorithms, including GSP (Generalized Sequential Pattern) and PrefixSpan. The knowledge representation inferred from the set-based approach can be extended to other data mining tasks and data analysis models. Such extension is demonstrated by two instances of enriched knowledge representations in sequential databases, namely set occurrence tables and set distance computations, along with their use in association rule generation, feature selection and ad hoc analysis in the post-mining stage. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
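The core of the set-based idea, mapping each candidate pattern to the set of sequence ids that support it so that candidate joins become set intersections, can be sketched as follows. This is a much-simplified illustration (length-1 and length-2 patterns only, each item assumed to occur at most once per sequence); the paper's actual algorithm and occurrence tables are richer.

```python
from collections import defaultdict

def frequent_sequences(db, min_support):
    """Mine frequent 1- and 2-item sequential patterns, representing each
    pattern by the set of sequence ids that support it."""
    # Occurrence sets for single items: item -> {sequence id}
    occ = defaultdict(set)
    for sid, seq in enumerate(db):
        for item in seq:
            occ[item].add(sid)
    freq1 = {(a,): s for a, s in occ.items() if len(s) >= min_support}

    # Candidate pairs (a, b) meaning "a occurs, then b later in the sequence".
    # Intersecting the two occurrence sets prunes sequences lacking either item.
    freq2 = {}
    for a in freq1:
        for b in freq1:
            support = set()
            for sid in freq1[a] & freq1[b]:
                seq = db[sid]
                ia = seq.index(a[0])
                if b[0] in seq[ia + 1:]:
                    support.add(sid)
            if len(support) >= min_support:
                freq2[a + b] = support
    return freq1, freq2

db = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"]]
f1, f2 = frequent_sequences(db, min_support=2)
print(sorted(f2))  # "a then c" and "b then c" meet the support threshold
```

Keeping the supporting-set alongside each pattern is what enables post-mining analysis: pruning, rule generation or distance computations can reuse the sets without rescanning the database.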
197. KNOWLEDGE BASE IN SOFTWARE PROJECT ESTIMATION.
- Author
-
MIŁOSZ, MAREK and BORYS, MAGDALENA
- Subjects
KNOWLEDGE base ,COMPUTER software management ,INFORMATION technology projects ,DATA mining ,KNOWLEDGE management - Abstract
Copyright of Studia i Materialy Polskiego Stowarzyszenia Zarzadzania Wiedza / Studies & Proceedings Polish Association for Knowledge Management is the property of Polish Society of Knowledge Management and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2011
198. DATA MINING IN PRACTICE: CUSTOMER SEGMENTATION BY PURCHASING BEHAVIOR.
- Author
-
Poláčková, Julie
- Subjects
- *
DATA mining , *DECISION making , *PERSONNEL management , *KNOWLEDGE management , *DATABASES - Abstract
The paper focuses on the usage of data mining techniques as a support tool for decision making. It describes the mining of hidden and potentially useful information from databases using data mining methods. These methods, sometimes called knowledge discovery techniques, help users, mostly managers, make qualified decisions in the organization. The aim of the knowledge management process is not only to collect information, but to transform it into knowledge and use it in the decision-making process. The purpose of this paper was to find and evaluate methodological approaches appropriate for customer segmentation. Various data mining techniques were used to demonstrate customer segmentation according to purchasing behavior within a selected hypermarket. The following techniques were used for clustering: the K-means clustering method, the TwoStep clustering method and self-organizing maps. The quality of the final models was evaluated by the silhouette measure, which combines the principles of cluster separation and cohesion. The data mining model was constructed from approximately 60 thousand transaction records; only food records were selected for the analysis. The paper also examined the effect of the number of dimensions on the clustering. The original variables were reduced to a smaller number of uncorrelated principal components, which were used to construct a scatter plot to check the homogeneity of the clusters. The results of this analysis confirmed that dimensionality reduction is a useful device for evaluating the generated clusters. [ABSTRACT FROM AUTHOR]
- Published
- 2011
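The clustering-plus-silhouette workflow described in the abstract can be illustrated with a minimal plain-Python sketch: K-means on two-dimensional points, then a silhouette score built from cohesion (mean intra-cluster distance) and separation (mean distance to the other cluster). This is a toy version with invented data; the paper used K-means, TwoStep and SOM on real transaction records.

```python
import random

def dist(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        labels = [min(range(k), key=lambda j: dist(p, centers[j])) for p in points]
        # Update step: each center moves to the mean of its members.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

def silhouette(points, labels):
    """Mean silhouette for the two-cluster case: (b - a) / max(a, b),
    where a is cohesion and b is separation, averaged over all points."""
    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for q, l in zip(points, labels) if l == labels[i]]
        other = [dist(p, q) for q, l in zip(points, labels) if l != labels[i]]
        a = sum(same) / max(len(same) - 1, 1)  # excludes p's zero self-distance
        b = sum(other) / len(other)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = kmeans(points, k=2)
score = silhouette(points, labels)
print(labels, round(score, 2))  # two well-separated groups score near 1
```

A score near 1 means clusters are both compact and far apart; values near 0 suggest the segmentation is arbitrary, which is the diagnostic role the silhouette measure plays in the paper.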
199. A Theoretical Framework for Comparison of Data Mining Techniques.
- Author
-
Taneja, Abhishek and Chauhan, R. K.
- Subjects
DATA mining ,KNOWLEDGE management ,DECISION making ,REGRESSION analysis ,RIDGE regression (Statistics) - Abstract
In recent years data mining has become one of the most important tools for extracting and manipulating data and establishing patterns in order to produce useful knowledge for decision making. Nearly all worldly activities have ways to record information, but are handicapped by not having the right tools to use this information to confront the uncertainties of the future. In data mining, the choice of technique depends on the perception of the analyst, and it is a daunting task to determine which data mining technique suits a given underlying dataset; much time is wasted finding the technique that best fits it. This paper proposes a theoretical framework for the comparison of different linear data mining techniques, in a bid to find the best technique and save the time that is usually spent on bagging, boosting and meta-learning. [ABSTRACT FROM AUTHOR]
- Published
- 2011
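One concrete instance of comparing linear techniques (the abstract's subjects include regression and ridge regression) is contrasting ordinary least squares with ridge on the same data. The one-dimensional sketch below uses invented data, and `fit_slope` is an illustrative helper, not anything from the paper; it shows the one behavioural difference that matters for such a comparison, namely that the ridge penalty shrinks the fitted slope.

```python
def fit_slope(xs, ys, ridge_lambda=0.0):
    """Slope of the least-squares line on centered data.
    ridge_lambda > 0 adds an L2 penalty (ridge regression),
    shrinking the slope toward zero."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / (sxx + ridge_lambda)

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # roughly y = 2x with noise
ols = fit_slope(xs, ys)
rdg = fit_slope(xs, ys, ridge_lambda=5.0)
print(ols, rdg)  # the ridge slope is strictly smaller in magnitude
```

A framework like the one the paper proposes would run each candidate technique over the same datasets and compare such fitted models on held-out error rather than hand-picking one.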
200. Inductive database languages: requirements and examples.
- Author
-
Romei, Andrea and Turini, Franco
- Subjects
DATABASES ,DATABASE management ,PROGRAMMING languages ,DATA mining ,KNOWLEDGE management - Abstract
Inductive databases (IDBs) represent a database perspective on knowledge discovery in databases (KDD). In an IDB, the KDD application can express both queries capable of accessing and manipulating data and queries capable of generating, manipulating, and applying patterns, allowing the notion of a mining process to be formalized. The feature that makes IDBs different from other data mining applications is precisely the idea of viewing support for knowledge discovery as an extension of the query process. This paper draws up a list of desirable properties to be taken into account in the definition of an IDB framework. They involve several dimensions, such as the expressiveness of the language in representing data and models, the closure principle, and the capability to support efficient algorithm programming. These requirements are the basis for a comparative study that highlights the strengths and weaknesses of existing IDB approaches. The paper focuses on the SQL-based ATLaS language/system, the logic-based LDL++ language/system, and the XML-based KDDML language/system. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
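The "mining as an extension of querying" idea can be hinted at with plain SQL over SQLite: a single query that both accesses the data and generates patterns (frequent item pairs with their support). This is only a sketch in the spirit of, and far simpler than, ATLaS, LDL++ or KDDML; the table and column names are invented for illustration.

```python
import sqlite3

# Market-basket data: (transaction id, item)
rows = [(1, "bread"), (1, "milk"), (2, "bread"), (2, "milk"),
        (2, "eggs"), (3, "milk"), (3, "eggs"), (4, "bread"), (4, "milk")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE basket (tid INTEGER, item TEXT)")
con.executemany("INSERT INTO basket VALUES (?, ?)", rows)

# Pattern-generating query: item pairs co-occurring in >= 2 transactions.
# The self-join enumerates candidate pairs; GROUP BY/HAVING counts support.
pairs = con.execute("""
    SELECT a.item, b.item, COUNT(*) AS support
    FROM basket a JOIN basket b ON a.tid = b.tid AND a.item < b.item
    GROUP BY a.item, b.item
    HAVING COUNT(*) >= 2
    ORDER BY COUNT(*) DESC
""").fetchall()
print(pairs)
```

Because the result is itself a relation, it satisfies the closure principle the paper discusses: discovered patterns can be stored, queried and joined like any other table, which is exactly what distinguishes the IDB view from a standalone mining tool.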