186 results for "Meersman, R."
Search Results
2. Business semantics management: A case study for competency-centric HRM
- Author
-
De Leenheer, P., Christiaens, S., and Meersman, R.
- Published
- 2010
- Full Text
- View/download PDF
3. Baroreceptor sensitivity after Valsalva maneuver in women with chronic obstructive pulmonary disease
- Author
-
Bartels, Matthew N., Gates, G. J., Downey, J. A., Armstrong, H. F., and De Meersman, R. E.
- Published
- 2012
- Full Text
- View/download PDF
4. Exercise training favourably affects autonomic and blood pressure responses during mental and physical stressors in African-American men
- Author
-
Bond, V, Bartels, M N, Sloan, R P, Millis, R M, Zion, A S, Andrews, N, and De Meersman, R E
- Published
- 2009
- Full Text
- View/download PDF
5. A Conceptual Model of the Blockchain
- Author
-
Bollen, Peter, Debruyne, C., Panetto, H., Guedria, W., Ciuciu, Karabatis, G., and Meersman, R.
- Subjects
Engineering, Blockchain, Conceptual model, Hyperledger Fabric, World state, Blueprint, Ledger, Fact-based modeling, Software engineering - Abstract
Hyperledger Fabric is a very large project under the umbrella of the Linux Foundation, with hundreds of developers involved. In this paper we illustrate how the application of fact-based modeling helps in understanding some basic features of the blockchain concept as it is used in Hyperledger Fabric (HLF), and how the resulting model can serve as a conceptual blueprint of HLF for all involved to use.
- Published
- 2020
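The ledger / world-state distinction mentioned in the abstract above can be illustrated in a few lines of Python (a deliberate simplification, not the paper's fact-based model; the car records are invented for illustration):

```python
# Illustrative model of two HLF concepts: the ledger (an append-only list of
# transactions) and the world state (latest value per key) derived from it.

def world_state(ledger):
    """Replay the ledger in order to derive the current world state."""
    state = {}
    for key, value in ledger:
        state[key] = value  # a later write to the same key wins
    return state

ledger = [("car1", {"owner": "alice"}),
          ("car2", {"owner": "bob"}),
          ("car1", {"owner": "carol"})]  # car1 re-sold: supersedes the first write
print(world_state(ledger))  # {'car1': {'owner': 'carol'}, 'car2': {'owner': 'bob'}}
```

The key point the model captures is that the world state is fully determined by the ledger, never the other way around.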
6. Enhancing process models to improve business performance
- Author
-
Dees, M., de Leoni, M., Mannhardt, F., Panetto, H., Debruyne, C., Gaaloul, W., Papazoglou, M., Paschke, A., Ardagna, C.A., and Meersman, R.
- Subjects
Process management, Process modeling, Computer science, Business rule, Business process, Process mining, Theoretical Computer Science, Conformance checking - Abstract
Process mining is not only about the discovery and conformance checking of business processes; it is also concerned with enhancing processes to improve business performance. Although from a business perspective this third main stream is at least as important as the other two, little research has been conducted on it. The existing body of work on process enhancement mainly focuses on adapting the process model to incorporate behavior observed in reality, rather than on improving the performance of the process. This paper reports on a methodology that creates an enhanced model with an improved performance level. The enhancements limit the incorporated behavior to those parts that do not violate any business rules, and the enhanced model is kept as close to the original model as possible. The practical relevance and feasibility of the methodology are assessed through two case studies. The results show that process models improved through our methodology, in comparison with state-of-the-art techniques, achieve improved KPI levels while still adhering to the desired prescriptive model.
- Published
- 2017
- Full Text
- View/download PDF
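The rule-based filtering idea in the abstract above can be sketched as follows (illustrative only: the activity names and the single business rule are invented, and the paper's methodology is far more elaborate):

```python
# Only observed deviations that do not violate a business rule are candidates
# for incorporation into the enhanced model.

def rule_pay_after_check(trace):
    """Example business rule: 'pay' must be preceded by 'check'."""
    return "pay" not in trace or ("check" in trace and
                                  trace.index("check") < trace.index("pay"))

observed_deviations = [
    ["request", "check", "expedite", "pay"],  # new activity, rule still holds
    ["request", "pay"],                       # skips "check": rule violated
]
to_incorporate = [t for t in observed_deviations if rule_pay_after_check(t)]
print(len(to_incorporate))  # 1: only the compliant deviation survives
```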
7. THE EFFECTS OF AN 8-WEEK OUTPATIENT PULMONARY REHABILITATION PROGRAM ON HEART RATE VARIABILITY AND RIGHT HEART FUNCTION IN PATIENTS WITH CHRONIC OBSTRUCTIVE PULMONARY DISEASE.
- Author
-
Gallucci, M, Lichtman, S, Pellicone, J, King, M, Wanstall, D, Domitrovich, P, Stein, P, and De Meersman, R
- Published
- 2003
8. A home-based resistance-training program using elastic bands for elderly patients with orthostatic hypotension
- Author
-
Zion, A. S., De Meersman, R., Diamond, B. E., and Bloomfield, D. M.
- Published
- 2003
- Full Text
- View/download PDF
9. AUTONOMIC RESPONSES TO TILT IN ATHLETES
- Author
-
Zion, A S., De Meersman, R E., Diamond, B E., and Bloomfield, D M.
- Published
- 2002
10. A generic framework for context-aware process performance analysis
- Author
-
Hompes, B.F.A., Buijs, J.C.A.M., van der Aalst, W.M.P., Debruyne, C., Panetto, H., Meersman, R., Dillon, T., Kühn, E., O'Sullivan, G., and Ardagna, C.A.
- Subjects
Process modeling, Context-aware, Business process, Computer science, Performance analysis, Process mining, Conformance checking, Business process discovery, Root cause analysis, Performance indicator, Data mining - Abstract
Process mining combines model-based process analysis with data-driven analysis techniques. The role of process mining is to extract knowledge and gain insights from event logs. Most existing techniques focus on process discovery (the automated extraction of process models) and conformance checking (aligning observed and modeled behavior). Relatively little research has been performed on the analysis of business process performance. Cooperative business processes often exhibit a high degree of variability and depend on many factors. Finding root causes for inefficiencies such as delays and long waiting times in such flexible processes remains an interesting challenge. This paper introduces a novel approach to analyze key process performance indicators by considering the process context. A generic context-aware analysis framework is presented that analyzes performance characteristics from multiple perspectives. A statistical approach is then utilized to evaluate and find significant differences in the results. Insights obtained can be used for finding high-impact points for optimization, prediction, and monitoring. The practical relevance of the approach is shown in a case study using real-life data.
- Published
- 2016
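A minimal sketch of the first step of such context-aware analysis is grouping case durations by one context attribute (the data and the channel attribute below are invented; the paper's framework covers many perspectives and adds statistical testing on top):

```python
# Group case throughput times by a context attribute and compare group means.
from statistics import mean

cases = [  # (case id, context: intake channel, throughput time in hours)
    ("c1", "web",   4.0), ("c2", "web",   5.0), ("c3", "web",   4.5),
    ("c4", "phone", 9.0), ("c5", "phone", 8.5), ("c6", "phone", 10.0),
]

def durations_by_context(cases):
    groups = {}
    for _, ctx, dur in cases:
        groups.setdefault(ctx, []).append(dur)
    return groups

groups = durations_by_context(cases)
means = {ctx: mean(ds) for ctx, ds in groups.items()}
print(means)  # phone cases take roughly twice as long as web cases
```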
11. The semantics of hybrid process models
- Author
-
Slaats, T., Schunselaar, D.M.M., Maggi, F.M., Reijers, H.A., Debruyne, Christophe, Panetto, Hervé, Meersman, Robert, Dillon, Tharam, Kühn, Eva, O'Sullivan, Declan, and Ardagna, Claudio Agostino
- Subjects
Process modeling, Computer science, Formal semantics, Petri net, Business process modeling, Notation, Semantics, Hybrid process model, Declare - Abstract
In the area of business process modelling, declarative notations have been proposed as alternatives to notations that follow the dominant, imperative paradigm. Yet, the choice between an imperative or declarative style of modelling is not always easy to make. Instead, a mixture of these styles is sometimes preferable. This observation has underpinned recent calls for so-called hybrid process modelling notations. In this paper, we present a formal semantics for these. In our proposal, a hybrid process model is hierarchical, where each of its sub-processes may be specified in either an imperative or declarative fashion. The semantics we provide will allow modelling communities to build on the benefits of existing imperative and declarative modelling notations, instead of spending their energy on inventing new ones.
- Published
- 2016
- Full Text
- View/download PDF
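The imperative/declarative distinction for sub-processes can be toy-modeled as follows (an assumption-laden sketch, not the paper's formal semantics: an imperative sub-process prescribes a sequence, a declarative one is a set of constraints a trace must satisfy):

```python
# A hybrid model's sub-process is either imperative (explicit sequence) or
# declarative (constraints as predicates over the trace).

def satisfies(subprocess, trace):
    kind, spec = subprocess
    if kind == "imperative":
        return trace == spec                      # must follow the sequence
    return all(rule(trace) for rule in spec)      # every constraint must hold

imperative_sub = ("imperative", ["a", "b", "c"])
declarative_sub = ("declarative", [lambda t: "x" in t,            # existence
                                   lambda t: t.count("y") <= 1])  # at most one y

print(satisfies(imperative_sub, ["a", "b", "c"]))   # True
print(satisfies(declarative_sub, ["x", "y", "z"]))  # True
print(satisfies(declarative_sub, ["y", "y"]))       # False
```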
12. Enhancing Business Process Flexibility by Flexible Batch Processing
- Author
-
Karastoyanova, Dimka, Pufahl, Luise, Panetto, H., Debruyne, C., Proper, H., Ardagna, C.A., Roman, D., and Meersman, R.
- Subjects
Flexibility, Business process, Computer science, Batch activities, Separation of concerns, Business process management, Modular architecture, Flexibility strategies, Batch processing, Systems architecture, Adaptation - Abstract
Business Process Management is a powerful approach for the automation of collaborative business processes. Recently, concepts have been introduced to allow batch processing in business processes, addressing the needs of different industries. The existing batch activity concepts are limited in their flexibility. In this paper we contribute different strategies for modeling and executing processes including batch work, to improve the flexibility 1) of business processes in general and 2) of the batch activity concept. The strategies support different flexibility aspects (i.e., variability, looseness, adaptation, and evolution) of batch activities. The strategies provide a systematic approach to categorize existing and future batch-enabled BPM systems. Furthermore, the paper provides a system architecture independent from existing BPM systems, which allows for the support of all the strategies. The architecture can be used with different process languages and existing execution environments in a non-intrusive manner.
- Published
- 2018
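The basic batch activity concept referred to above can be sketched like this (a hypothetical simplification; real batch activity concepts support far richer activation rules than a size threshold):

```python
# Process instances wait at a batch activity until an activation rule fires
# (here: a maximum-size threshold), then are executed together as one batch.

class BatchActivity:
    def __init__(self, max_size):
        self.max_size, self.waiting, self.executed = max_size, [], []

    def arrive(self, instance):
        self.waiting.append(instance)
        if len(self.waiting) >= self.max_size:        # activation rule
            self.executed.append(tuple(self.waiting)) # one batched execution
            self.waiting = []

batch = BatchActivity(max_size=3)
for case in ["c1", "c2", "c3", "c4"]:
    batch.arrive(case)
print(batch.executed, batch.waiting)  # [('c1', 'c2', 'c3')] ['c4']
```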
13. Ontologies for commitment-based smart contracts
- Author
-
de Kruijff, Joost, Weigand, Hans, Panetto, H., Debruyne, C., Gaaloul, W., Papazoglou, M., Paschke, A., Ardagna, C.A., and Meersman, R.
- Subjects
enterprise ontology, commitment-based smart contracts, model driven architecture, commitments, blockchain - Abstract
Smart contracts have gained rapid exposure since the inception of blockchain technology, yet there is no unified ontology for them. Because they are categorized either as coded contracts or as substitutes for conventional legal contracts, there is a need to reduce the conceptual ambiguity of smart contracts. We applied enterprise ontology and model-driven architecture to abstract smart contracts at the essential, infological, and datalogical levels, explaining the system behind computation- and platform-independent smart contracts rather than their functional behavior. This conceptual paper introduces commitment-based smart contracts, in which a contract is viewed as a business exchange consisting of a set of reciprocal commitments. A smart contract ensures the automated execution of most of these commitments.
- Published
- 2017
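A commitment-based view of a contract can be illustrated minimally (the class and field names are invented for illustration; the paper works at the ontological level, not in code):

```python
# A contract as a set of reciprocal commitments; the "smart contract" would
# discharge each commitment when its condition is met.

class Commitment:
    def __init__(self, debtor, creditor, action):
        self.debtor, self.creditor, self.action = debtor, creditor, action
        self.fulfilled = False

    def fulfil(self):
        self.fulfilled = True

contract = [
    Commitment("buyer", "seller", "pay 100"),   # buyer owes seller payment
    Commitment("seller", "buyer", "ship goods"),# seller owes buyer delivery
]
contract[0].fulfil()  # the payment commitment is discharged
open_commitments = [c.action for c in contract if not c.fulfilled]
print(open_commitments)  # ['ship goods']
```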
14. Methodological support for coordinating tasks in global product software engineering
- Author
-
Widiyatmoko, Carolus, Overbeek, S.J., Brinkkemper, S., Panetto, H., Debruyne, C., Gaaloul, W., Papazoglou, M., Paschke, A., Ardagna, C.A., and Meersman, R.
- Subjects
Computer science, Global software engineering, Software producing organization, Software development, Task coordination, Software engineering, Design science, Method engineering - Abstract
The distribution of software processes by software producing organizations (SPOs) is becoming increasingly common due to the benefits that global software engineering (GSE) brings in terms of cost reduction, leveraging competencies, and market expansion. However, these organizations face communication and project control issues that can slow down overall organizational performance. SPOs should therefore be able to manage inter-dependencies among the tasks distributed to globally dispersed teams. We analyze existing work and product software companies' best practices in coordinating tasks in GSE. This paper specifically focuses on constructing methodological support for task coordination that can be influenced by the situational factors at the companies. The support comprises a framework and a method developed using a method engineering approach. We introduce the framework, which depicts the aspects that companies should examine, and the method, which elaborates the practices that guide companies in coordinating tasks in GSE projects. The validation results show that experts accept the framework and the method with regard to completeness and applicability for helping SPOs manage coordination among globally distributed teams.
- Published
- 2017
15. Finding process variants in event logs (short paper)
- Author
-
Bolt Iriondo, A.J., van der Aalst, W.M.P., de Leoni, M., Panetto, H., Debruyne, C., Gaaloul, W., Papazoglou, M., Paschke, A., Ardagna, C.A., and Meersman, R.
- Subjects
Event data, Process variant detection, Process mining - Abstract
The analysis of event data is particularly challenging when there is a lot of variability. Existing approaches can detect variants in very specific settings (e.g., changes of control-flow over time), or do not use statistical testing to decide whether a variant is relevant or not. In this paper, we introduce an unsupervised and generic technique to detect significant variants in event logs by applying existing, well-proven data mining techniques for recursive partitioning driven by conditional inference over event attributes. The approach has been fully implemented and is freely available as a ProM plugin. Finally, we validated our approach by applying it to a real-life event log obtained from a multinational Spanish telecommunications and broadband company, obtaining valuable insights directly from the event data.
- Published
- 2017
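The role of statistical testing in deciding whether a candidate split yields a genuine variant can be illustrated with a hand-rolled chi-square test (a toy stand-in for the conditional inference the paper uses; the numbers are invented):

```python
# Decide whether splitting the log on one event attribute (customer segment)
# produces significantly different outcome distributions.

def chi_square(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# rows: customer segment; columns: outcome of the "approve" step
table = [[40, 10],   # segment A: 40 approved, 10 rejected
         [15, 35]]   # segment B: 15 approved, 35 rejected
stat = chi_square(table)
significant = stat > 3.84  # chi-square critical value, alpha = 0.05, df = 1
print(round(stat, 2), significant)  # 25.25 True: report the split as a variant
```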
16. On the role of fitness, precision, generalization and simplicity in process discovery
- Author
-
Buijs, J.C.A.M., Dongen, van, B.F., Aalst, van der, W.M.P., and Meersman, R.
- Subjects
Process modeling, Computer science, Generalization, Machine learning, Business process discovery, Quality, Simplicity, Artificial intelligence - Abstract
Process discovery algorithms typically aim at discovering process models from event logs that best describe the recorded behavior. Often, the quality of a process discovery algorithm is measured by quantifying to what extent the resulting model can reproduce the behavior in the log, i.e., replay fitness. At the same time, there are many other metrics that compare a model with recorded behavior, in terms of the precision of the model and the extent to which the model generalizes the behavior in the log. Furthermore, several metrics exist to measure the complexity of a model irrespective of the log. In this paper, we show that existing process discovery algorithms typically consider at most two of the four main quality dimensions: replay fitness, precision, generalization, and simplicity. Moreover, existing approaches cannot steer the discovery process based on user-defined weights for the four quality dimensions. This paper also presents the ETM algorithm, which allows the user to seamlessly steer the discovery process based on preferences with respect to the four quality dimensions. We show that all dimensions are important for process discovery; however, it only makes sense to consider precision, generalization, and simplicity if the replay fitness is acceptable.
- Published
- 2013
- Full Text
- View/download PDF
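Steering discovery by user-defined weights over the four dimensions can be sketched as a weighted average (the ETM algorithm's actual quality computation is more involved; the scores and weights here are invented):

```python
# Combine the four quality dimensions into one steerable score.

def weighted_quality(scores, weights):
    """scores/weights: dicts over the four dimensions, each score in [0, 1]."""
    dims = ("fitness", "precision", "generalization", "simplicity")
    total = sum(weights[d] for d in dims)
    return sum(weights[d] * scores[d] for d in dims) / total

candidate = {"fitness": 0.95, "precision": 0.70,
             "generalization": 0.80, "simplicity": 0.60}
weights = {"fitness": 10, "precision": 1,
           "generalization": 1, "simplicity": 1}  # replay fitness dominates
print(round(weighted_quality(candidate, weights), 3))  # 0.892
```

Giving replay fitness a much larger weight mirrors the paper's observation that the other three dimensions only matter once fitness is acceptable.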
17. Online discovery of cooperative structures in business processes
- Author
-
van Zelst, S.J., van Dongen, B.F., van der Aalst, W.M.P., Debruyne, C., Panetto, H., Meersman, R., Dillon, T., Kühn, E., O'Sullivan, D., and Ardagna, C.A.
- Subjects
Process enhancement, Computer science, Business process, Process mining, Network dynamics, Data science, Business process discovery, Open source, Event streams, Cooperative resource networks - Abstract
Process mining is a data-driven technique aiming to provide novel insights and help organizations improve their business processes. In this paper, we focus on the cooperative aspect of process mining, i.e., discovering networks of cooperating resources that together perform processes. We use online streams of events as input, rather than the event logs typically used in an off-line setting. We present the Online Cooperative Network (OCN) framework, which defines online cooperative resource network discovery in a generic way. A prototypical implementation of the framework is available in the open-source process mining toolkit ProM. By means of an empirical evaluation we show the applicability of the framework in the streaming domain. The techniques presented operate in real time and are able to handle unlimited amounts of data. Moreover, the implementation allows network dynamics to be visualized, which helps in gaining insight into changes in the execution of the underlying business process.
- Published
- 2016
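The core of online cooperative network discovery, counting handovers of work between resources as events stream in, can be sketched as follows (illustrative only; the OCN framework in ProM is far richer):

```python
# Consume events one by one and count directed handovers between resources
# within each case; memory per case is just the last resource seen.
from collections import defaultdict

def stream_handovers(events):
    """events: iterable of (case id, resource). Returns handover counts."""
    last_resource = {}              # case id -> last resource seen
    network = defaultdict(int)      # (from, to) -> handover count
    for case, resource in events:
        prev = last_resource.get(case)
        if prev is not None and prev != resource:
            network[(prev, resource)] += 1
        last_resource[case] = resource
    return dict(network)

stream = [("c1", "alice"), ("c2", "bob"), ("c1", "bob"),
          ("c1", "carol"), ("c2", "bob"), ("c2", "carol")]
print(stream_handovers(stream))  # {('alice', 'bob'): 1, ('bob', 'carol'): 2}
```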
18. The Interplay of Mandatory Role and Set-Comparison constraints
- Author
-
Bollen, P.W.L., Herrero, P., Panetto, H., Meersman, R., and Dillon, T.
- Subjects
Subject-matter expert, Theoretical computer science, Computer science - Abstract
In this paper we will focus on the interplay of mandatory role and set-comparison (equality-, subset- and exclusion-) constraints in fact based modeling. We will present an algorithm that can be used to derive mandatory role constraints in combination with non-implied set-comparison constraints as a result of the acceptance or rejection of real-life user examples by the domain expert.
- Published
- 2012
- Full Text
- View/download PDF
19. Efficient RDFS entailment in external memory
- Author
-
Haffmans, W.J., Fletcher, G.H.L., Meersman, R., Dillon, T., and Herrero, P.
- Subjects
Computer science, RDF Schema, Search engine indexing, Internal memory, RDF graph, RDF, Logical consequence, Algorithm, Auxiliary memory - Abstract
The entailment of an RDF graph under the RDF Schema standard can easily become too costly to compute and maintain. It is often more desirable to compute on-demand whether a triple exists in the entailment. This is a non-trivial task likely to incur I/O costs, since RDF graphs are often too large to fit in internal memory. As disk I/O is expensive in terms of time, I/O costs should be minimized to achieve better performance. We investigate three physical indexing methods for RDF storage on disk, comparing them using the state of the art RDF Schema entailment algorithm of Muñoz et al. In particular, the I/O behavior during entailment checking over these graph representations is studied. Extensive empirical analysis shows that an enhanced version of the state of the art indexing method, which we propose here, yields in general the best I/O performance.
- Published
- 2011
- Full Text
- View/download PDF
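The on-demand alternative to materializing the full entailment can be illustrated for a single RDFS rule, reflexive-transitive `rdfs:subClassOf` (this in-memory sketch deliberately ignores the disk indexing and I/O behavior that are the paper's actual subject):

```python
# Answer a single rdfs:subClassOf entailment query by graph reachability over
# the stored triples, instead of computing the whole closure up front.
from collections import defaultdict

def subclass_edges(triples):
    g = defaultdict(set)
    for s, p, o in triples:
        if p == "rdfs:subClassOf":
            g[s].add(o)
    return g

def entails_subclass(triples, sub, sup):
    """Is (sub, rdfs:subClassOf, sup) in the entailment?"""
    if sub == sup:                      # reflexivity
        return True
    g = subclass_edges(triples)
    seen, stack = set(), [sub]
    while stack:                        # transitivity via depth-first search
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if sup in g[node]:
            return True
        stack.extend(g[node])
    return False

triples = [
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
]
print(entails_subclass(triples, "ex:Dog", "ex:Animal"))  # True
```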
20. A derivation procedure for set-comparison constraints in fact-based modeling
- Author
-
Bollen, P.W.L., Meersman, R., Dillon, T., and Herrero, P.
- Subjects
Subject-matter expert, Theoretical computer science, Computer science, Data mining, Conceptual schema, Derivation procedure - Abstract
In this paper we will address the conceptual schema design procedure (CSDP) in fact-based modeling. We will focus on the modeling procedure, or 'cook-book', for deriving set-comparison constraints. We will give an algorithm that can be applied by an analyst in an analyst-user dialogue, in which all set-comparison constraints can be derived as a result of the acceptance or rejection of real-life user examples by the domain expert.
- Published
- 2011
- Full Text
- View/download PDF
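The derivation idea, proposing a set-comparison constraint from accepted example populations, can be sketched as follows (a toy version; the paper's algorithm drives a structured analyst-user dialogue rather than a one-shot comparison):

```python
# Given the accepted populations of two roles, propose the set-comparison
# constraint (equality, subset, or exclusion) that the examples support.

def compare_populations(pop_a, pop_b):
    a, b = set(pop_a), set(pop_b)
    if a == b:
        return "equality"
    if a <= b:
        return "subset (A of B)"
    if b <= a:
        return "subset (B of A)"
    if not (a & b):
        return "exclusion"
    return "no constraint"

print(compare_populations(["p1", "p2"], ["p1", "p2", "p3"]))  # subset (A of B)
print(compare_populations(["p1"], ["p9"]))                    # exclusion
```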
21. Boosting Web Intrusion Detection Systems by Inferring Positive Signatures
- Author
-
Bolzoni, D., Etalle, S., Meersman, R., and Tari, Z.
- Subjects
Computer science, Anomaly-based intrusion detection system, Intrusion detection system, Web application security, Regular language, Web application, Anomaly detection, Data mining - Abstract
We present a new approach to anomaly-based network intrusion detection for web applications. The approach is based on dividing the input parameters of the monitored web application into two groups, the "regular" and the "irregular" ones, and applying a new anomaly detection method to the "regular" ones based on the inference of a regular language. We support our proposal by realizing Sphinx, an anomaly-based intrusion detection system built on it. Thorough benchmarks show that Sphinx performs better than current state-of-the-art systems, both in terms of false positives/false negatives and in needing a shorter training period.
- Published
- 2008
- Full Text
- View/download PDF
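The "inference of a regular language" idea can be caricatured in a few lines (an assumed simplification, not Sphinx's actual learning algorithm): generalize benign training values of a regular parameter to character-class patterns, then flag values that break every learned pattern.

```python
# Learn a coarse character-class pattern per parameter from benign values.
import re

def infer_pattern(values):
    """Generalize each training value to character classes, union the results."""
    classes = set()
    for v in values:
        pat = "".join("d" if ch.isdigit() else "a" if ch.isalpha()
                      else re.escape(ch) for ch in v)
        pat = re.sub(r"d+", "[0-9]+", pat)      # collapse digit runs
        pat = re.sub(r"a+", "[A-Za-z]+", pat)   # collapse letter runs
        classes.add(pat)
    return re.compile("^(" + "|".join(sorted(classes)) + ")$")

training = ["1023", "87", "44321"]          # benign values of an "id" parameter
pattern = infer_pattern(training)

print(bool(pattern.match("999")))           # True: still all digits
print(bool(pattern.match("1 OR 1=1")))      # False: SQL-injection-like payload
```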
22. A model-driven approach for the specification and analysis of access control policies
- Author
-
Massacci, Fabio, Zannone, Nicola, Meersman, R., and Tari, Z.
- Subjects
Computer science, Modeling language, Role-based access control, Systems engineering, Requirements specification, Access control, Software engineering - Abstract
Recent years have seen the definition of many languages, models, and standards tailored to specifying and enforcing access control policies, but such frameworks do not provide methodological support during the policy specification process. In particular, they do not provide facilities for the analysis of the social context in which the system operates. In this paper we propose a model-driven approach for the specification and analysis of access control policies. We build this framework on top of SI*, a modeling language tailored to capturing and analyzing the functional and security requirements of socio-technical systems. The framework also provides formal mechanisms to assist policy writers and system administrators in the verification of access control policies and of the actual user-permission assignment.
- Published
- 2008
- Full Text
- View/download PDF
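Verifying an actual user-permission assignment against a policy, the last step mentioned in the abstract, reduces in the simplest case to a lookup (the role names and operations below are invented; the paper performs this verification formally over SI* models):

```python
# Check whether each user's operations stay within the policy granted to
# their assigned role.

policy = {          # role -> permitted operations
    "clerk":   {"read_claim"},
    "manager": {"read_claim", "approve_claim"},
}
assignment = {"ann": "clerk", "bob": "manager"}  # actual user-role assignment

def permitted(user, operation):
    return operation in policy.get(assignment.get(user), set())

print(permitted("ann", "approve_claim"))  # False: would violate the policy
print(permitted("bob", "approve_claim"))  # True
```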
23. Modeling data federations in ORM
- Author
-
Balsters, Herman, Halpin, Terry, Meersman, R., Tari, Z., and Herrero, P.
- Subjects
Information retrieval, Computer science, Conceptual schema, Data warehouse, Data modeling, Consistency, Data extraction, Global schema, Data mining - Abstract
Two major problems in constructing data federations (for example, data warehouses and database federations) concern achieving and maintaining consistency and a uniform representation of the data on the global level of the federation. The first step in creating uniform representations of data is known as data extraction, whereas data reconciliation is concerned with resolving data inconsistencies. Our approach to constructing a global conceptual schema as the result of integrating a collection of (semantically) heterogeneous component schemas is based on the concept of exact views. We show that a global schema constructed in terms of exact views integrates component schemas in such a way that the global schema is populated by exactly those instances allowed by the local schemas (and in special cases, also the other way around). In this sense, the global schema is equivalent to the set of component schemas from which it is derived. This paper describes a modeling framework for data federations based on the Object-Role Modeling (ORM) approach. In particular, we show that we can represent exact views within ORM, providing the means to resolve data extraction and reconciliation problems in a combined setting on the global level of the federation.
- Published
- 2007
- Full Text
- View/download PDF
24. Understanding the occurrence of errors in process models based on metrics
- Author
-
Mendling, J., Neumann, G., Aalst, van der, W.M.P., Meersman, R., and Tari, Z.
- Subjects
Process modeling, Business process, Computer science, Management science, Information system, Quality, Business process modeling, Data science - Abstract
Business process models play an important role in the management, design, and improvement of process organizations and process-aware information systems. Despite the extensive application of process modeling in practice, hardly any empirical results are available on quality aspects of process models. This paper aims to advance the understanding of this matter by analyzing the connection between formal errors (such as deadlocks) and a set of metrics that capture various structural and behavioral aspects of a process model. In particular, we discuss the theoretical connection between errors and metrics, and provide a comprehensive validation based on an extensive sample of EPC process models from practice. Furthermore, we investigate the capability of the metrics to predict errors in a second, independent sample of models. The high explanatory power of the metrics has considerable consequences for the design of future modeling guidelines and modeling tools.
- Published
- 2007
- Full Text
- View/download PDF
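A couple of the simplest structural metrics discussed in such work can be computed directly from a process graph (the node names are invented, and the paper's metric suite for EPCs is much richer):

```python
# Basic structural metrics over a small process graph (adjacency list).

graph = {
    "start":   ["check"],
    "check":   ["xor"],
    "xor":     ["approve", "reject"],   # a split connector
    "approve": ["end"],
    "reject":  ["end"],
    "end":     [],
}

size = len(graph)                              # number of nodes
arcs = sum(len(v) for v in graph.values())     # number of arcs
max_out = max(len(v) for v in graph.values())  # largest split (out-)degree
density = arcs / (size * (size - 1))           # arcs relative to the maximum
print(size, arcs, max_out, density)            # 6 6 2 0.2
```

Intuitively, larger and denser models with higher connector degrees are the ones such metrics flag as error-prone.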
25. Information quality in dynamic networked business process management
- Author
-
Rasouli, M., Eshuis, H., Trienekens, J.J.M., Kusters, R.J., Grefen, P.W.P.J., Debruyne, C., Panetto, H., Meersman, R., Dillon, T., Weichhart, G., An, Y., and Ardagna, C.A.
- Subjects
Business process management, Knowledge management, Business process, Business networking, Information leakage, Information quality, Dynamism - Abstract
Competition in globalized markets forces organizations to provide mass-customized integrated solutions for customers. The mass-customization of integrated solutions by a business network requires adaptive interactions between parties to address the emerging requirements of customers. These adaptive interactions need to be enabled by dynamic networked business processes (DNBP) that are supported by high-quality information. However, the dynamic collaboration between parties can result in information quality (IQ) issues such as syntactic and semantic misalignment of information, information leakage, and unclear information ownership. To counter the negative consequences of poor IQ on performance, the orchestrator of the business network needs to clearly recognize these IQ issues. In this paper, we develop and evaluate a framework to address potential IQ issues related to DNBP. The development of the framework is based on a three-step methodology that includes the characterization of the dynamism of networked business processes, the characterization of IQ dimensions, and the exploration of IQ issues. To evaluate the practical significance of the explored IQ issues, we conduct a case study in a service ecosystem shaped by a car leasing organization to provide integrated mobility solutions for customers.
- Published
- 2015
- Full Text
- View/download PDF
26. Process mining and verification of properties : an approach based on temporal logic
- Author
-
Aalst, van der, W.M.P., Beer, de, H.T., Dongen, van, B.F., Meersman, R., and Tari, Z.
- Subjects
Computer science, Process mining, Petri net, Business process management, Business process discovery, Workflow, Knowledge extraction, Linear temporal logic, Information system, Temporal logic, Data mining - Abstract
Information systems are facing conflicting requirements. On the one hand, systems need to be adaptive and self-managing to deal with rapidly changing circumstances. On the other hand, legislation such as the Sarbanes-Oxley Act is putting increasing demands on monitoring activities and processes. As processes and systems become more flexible, both the need for, and the complexity of, monitoring increases. Our earlier work on process mining has primarily focused on process discovery, i.e., automatically constructing models describing knowledge extracted from event logs. In this paper, we focus on a different problem complementing process discovery: given an event log and some property, we want to verify whether the property holds. For this purpose we have developed a new language based on Linear Temporal Logic (LTL), which we combine with a standard XML format for storing event logs. Given an event log and an LTL property, our LTL Checker verifies whether the observed behavior matches the (un)expected/(un)desirable behavior.
- Published
- 2005
- Full Text
- View/download PDF
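The kind of property such a checker verifies can be approximated for one common pattern, "a is eventually followed by b", without any temporal-logic machinery (illustrative only; the activity and case names are invented, and the actual LTL Checker handles arbitrary LTL formulas):

```python
# Verify a response property over every trace in an event log and report
# the violating cases.

def eventually_follows(trace, a, b):
    """True if every occurrence of activity a is later followed by b."""
    for i, act in enumerate(trace):
        if act == a and b not in trace[i + 1:]:
            return False
    return True

def check_log(log, a, b):
    """Return the traces violating the property, keyed by case id."""
    return {cid: t for cid, t in log.items() if not eventually_follows(t, a, b)}

log = {
    "case1": ["request", "check", "pay"],
    "case2": ["request", "reject"],   # violates: no "pay" after "request"
    "case3": ["check", "pay"],        # trivially satisfied
}
violations = check_log(log, "request", "pay")
print(sorted(violations))  # ['case2']
```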
27. On the notion of coupling in communication middleware
- Author
-
Aldred, L., Aalst, van der, W.M.P., Dumas, M., Hofstede, ter, A.H.M., Meersman, R., and Tari, Z.
- Subjects
Transaction processing, Computer science, Distributed computing, Mobile computing, Loose coupling, Business Process Execution Language, Grid computing, Common Object Request Broker Architecture, Asynchronous communication, Middleware, Message-oriented middleware, Software architecture - Abstract
It is well accepted that different types of distributed architectures require different levels of coupling. For example, in client-server and three-tier architectures the application components are generally tightly coupled with each other and with the underlying communication middleware. Meanwhile, in off-line transaction processing, grid computing, and mobile application architectures, the degree of coupling between application components and with the underlying middleware needs to be minimised along different dimensions. In the literature, terms such as synchronous, asynchronous, blocking, non-blocking, directed, and non-directed are generally used to refer to the degree of coupling required by a given architecture or provided by a given middleware. However, these terms are used with various connotations by different authors and middleware vendors. And while several informal definitions of these terms have been provided, there is a lack of an overarching framework with a formal grounding upon which software architects can rely to unambiguously communicate architectural requirements with respect to coupling. This paper addresses this gap by: (i) identifying and formally defining three dimensions of coupling; (ii) relating these dimensions to existing communication middleware; and (iii) proposing notational elements for representing coupling configurations. The identified dimensions provide the basis for a classification of middleware which can be used as a selection instrument.
- Published
- 2005
- Full Text
- View/download PDF
28. Efficient Processing of Secured XML Metadata
- Author
-
Feng, L., Meersman, R., Jonker, Willem, Tari, Z., and Databases (Former)
- Subjects
Information management ,Data element ,Service delivery framework ,computer.internet_protocol ,Computer science ,Knowledge engineering ,Meta Data Services ,Metadata repository ,Metadata ,World Wide Web ,DB-SDM: SECURE DATA MANAGEMENT ,Metadata management ,EWI-8092 ,Information system ,computer ,XML ,Database catalog ,IR-63666 - Abstract
Metadata management is a key issue in intelligent Web-based environments. It plays an important role in a wide spectrum of areas, ranging from semantic explication, information handling, knowledge management and multimedia processing to personalized service delivery. As a result, security issues around metadata management need to be addressed in order to build trust and confidence in ambient environments. The aim of this paper is to bring together the worlds of security and XML-formatted metadata management in such a way that, on the one hand the requirement on secure metadata management is satisfied, while on the other hand the efficiency of metadata processing can still be guaranteed. To this end, we develop an effective approach to enable efficient search on encrypted XML metadata. The basic idea is to augment encrypted XML metadata with encodings which characterize the topology and content of every tree-structured XML metadata, and then filter out candidate data for decryption and query execution by examining query conditions against these encodings. We describe a generic framework consisting of three phases, namely, query preparation, query pre-processing and query execution, to implement the proposed search strategy.
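The filter-before-decrypt idea can be sketched as follows. This is a hypothetical illustration of the encoding scheme, not the paper's actual construction: the hash-set encoding, `encode`, and `candidates` are names and simplifications introduced here.

```python
import hashlib

def h(s: str) -> str:
    """Short stable digest used as an encoding token."""
    return hashlib.sha256(s.encode()).hexdigest()[:16]

def encode(paths_with_values: dict[str, str]) -> set[str]:
    """Build a searchable encoding: hashed paths (topology) and
    hashed path=value pairs (content)."""
    enc = set()
    for path, value in paths_with_values.items():
        enc.add(h(path))               # which paths exist in the tree
        enc.add(h(f"{path}={value}"))  # which values they carry
    return enc

def candidates(query: dict[str, str], store: dict[str, set[str]]) -> list[str]:
    """Pre-processing phase: keep only documents whose encoding covers the
    query conditions; only these need decryption and full query execution."""
    needed = encode(query)
    return [doc_id for doc_id, enc in store.items() if needed <= enc]

# Encrypted store modeled as: doc id -> plaintext encoding
# (the ciphertext itself is not modeled in this sketch).
store = {
    "doc1": encode({"/meta/author": "Feng", "/meta/year": "2003"}),
    "doc2": encode({"/meta/author": "Tari", "/meta/year": "2003"}),
}
```

A query on `/meta/author = Feng` would rule out `doc2` from its encoding alone, so `doc2` is never decrypted.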
- Published
- 2003
- Full Text
- View/download PDF
29. Preparing SCORM for the semantic web
- Author
-
Aroyo, L.M., Pokraev, S., Brussee, Rogier, Meersman, R., Tari, Z., and Schmidt, D.C.
- Subjects
Knowledge representation and reasoning ,Computer science ,Business process ,business.industry ,METIS-306393 ,Interoperability ,Knowledge engineering ,Context (language use) ,computer.file_format ,Semantics ,IR-92525 ,World Wide Web ,Annotation ,SDG 4 – Kwaliteitsonderwijs ,ComputingMilieux_COMPUTERSANDEDUCATION ,The Internet ,RDF ,business ,Semantic Web ,computer ,SDG 4 - Quality Education - Abstract
In this paper we argue that efforts within the context of Semantic Web research, such as RDF and DAML-S, will allow for better knowledge representation and engineering of educational systems and easier integration of e-learning with other business processes. We also argue that existing educational standards, such as SCORM and LOM, could be mapped to those technologies, providing for more efficient automation of processes like educational resource annotation and intelligent accessibility management. In this way we can use a successful combination of the technical advances outside the educational context and the existing educational standards, and allow for easier interoperability. To illustrate these issues and a solution approach we present the OntoAIMS educational environment.
- Published
- 2003
- Full Text
- View/download PDF
30. Continuous Monitoring in Evolving Business Networks
- Author
-
Comuzzi, M., Vonk, J., Grefen, P.W.P.J., Meersman, R., Dillon, T., Meersman, R, Dillon, TS, Herrero, P, and Information Systems IE&IS
- Subjects
QA75 ,HF ,SIMPLE (military communications protocol) ,Process (engineering) ,business.industry ,Computer science ,Distributed computing ,Continuous monitoring ,Network formation ,Risk analysis (engineering) ,Business networking ,Information technology management ,Business activity monitoring ,Architecture ,business - Abstract
The literature on continuous monitoring of cross-organizational processes, executed within virtual enterprises or business networks, considers monitoring as an issue regarding the network formation, since what can be monitored during process execution is fixed when the network is established. In particular, the impact of evolving agreements in such networks on continuous monitoring is not considered. Also, monitoring is limited to process execution progress and simple process data. In this paper, we extend the possible monitoring options by linking monitoring requirements to generic clauses in agreements established across a network and focus on the problem of preserving the continuous monitorability of these clauses when the agreements evolve, i.e. they are introduced, dropped, or updated. We discuss mechanisms to preserve continuous monitorability in a business network for different types of agreement evolution and we design a conceptual and technical architecture for a continuous monitoring IT infrastructure that implements the requirements derived from such mechanisms.
- Published
- 2010
31. Compliance checking of data-aware and resource-aware compliance requirements
- Author
-
Ramezani Taghiabadi, E., Gromov, V., Fahland, D., Aalst, van der, W.M.P., Meersman, R., Panetto, H., Dillon, T., Missikoff, M., Liu, L., Pastor, O., Cuzzocrea, A., Sllis, T., and Process Science
- Subjects
Resource (project management) ,Risk analysis (engineering) ,Computer science ,Process (engineering) ,restrict ,Business process ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Audit ,Conformance checking ,Domain (software engineering) ,Compliance (psychology) - Abstract
Compliance checking is gaining importance as today’s organizations need to show that their business practices are in accordance with predefined (legal) requirements. Current compliance checking techniques are mostly focused on checking the control-flow perspective of business processes. This paper presents an approach for checking the compliance of observed process executions taking into account data, resources, and control-flow. Unlike the majority of conformance checking approaches, we do not restrict the focus to the ordering of activities (i.e., control-flow). We show a collection of typical data- and resource-aware compliance rules together with some domain-specific rules. Moreover, providing diagnostics and insight about the deviations is often neglected in current compliance checking techniques. We use control-flow and data-flow alignment to check compliance of processes and combine diagnostics obtained from both techniques to show deviations from prescribed behavior. Furthermore, we indicate the severity of observed deviations. This approach integrates with two existing approaches for control-flow and temporal compliance checking, allowing for multi-perspective diagnostic information in case of compliance violations. We have implemented our techniques and show their feasibility by checking compliance of synthetic and real-life event logs with resource- and data-aware compliance rules. Keywords: compliance checking; auditing; data-aware and resource-aware compliance requirements; conformance checking
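A data- and resource-aware rule of the kind the abstract describes can be sketched against a flat event log. The rules below (a four-eyes rule and an amount limit), the log layout, and all names are illustrative assumptions, not the paper's formalism.

```python
# Each event: {"case": .., "activity": .., "resource": .., "amount": ..}
Log = list[dict]

def check_case(events: list[dict]) -> list[str]:
    """Check one case against two multi-perspective rules:
    - resource rule: the requester must not approve their own request
    - data rule: approved amounts must not exceed 10000."""
    violations = []
    requester = next((e["resource"] for e in events
                      if e["activity"] == "request"), None)
    for e in events:
        if e["activity"] == "approve":
            if e["resource"] == requester:
                violations.append("four-eyes violated")
            if e.get("amount", 0) > 10000:
                violations.append("amount limit exceeded")
    return violations

log: Log = [
    {"case": "c1", "activity": "request", "resource": "sara", "amount": 9000},
    {"case": "c1", "activity": "approve", "resource": "sara", "amount": 9000},
    {"case": "c2", "activity": "request", "resource": "sara", "amount": 20000},
    {"case": "c2", "activity": "approve", "resource": "pete", "amount": 20000},
]

by_case: dict[str, list[dict]] = {}
for e in log:
    by_case.setdefault(e["case"], []).append(e)
results = {c: check_case(es) for c, es in by_case.items()}
```

Note that a pure control-flow check would pass both cases: the ordering request-then-approve is correct, and only the resource and data perspectives expose the deviations.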
- Published
- 2014
32. Decomposing alignment-based conformance checking of data-aware process models
- Author
-
Leoni, de, M., Munoz-Gama, J., Carmona, J., Aalst, van der, W.M.P., Meersman, R., Panetto, H., Dillon, T., Missikoff, M., Liu, L., Pastor, O., Cuzzocrea, A., Sllis, T., Process Science, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, and Universitat Politècnica de Catalunya. ALBCOM - Algorismia, Bioinformàtica, Complexitat i Mètodes Formals
- Subjects
Process modeling ,Correctness ,Computer science ,Event (computing) ,media_common.quotation_subject ,Informàtica::Sistemes d'informació [Àrees temàtiques de la UPC] ,Process mining ,Conformance checking ,Petri nets ,Petri net ,computer.software_genre ,Multi-perspective process modelling ,Business process discovery ,Petri, Xarxes de ,Quality (business) ,Informàtica::Intel·ligència artificial [Àrees temàtiques de la UPC] ,Data mining ,Mineria de dades ,computer ,Divide-and-conquer techniques ,media_common - Abstract
Process mining techniques relate observed behavior to modeled behavior, e.g., the automatic discovery of a Petri net based on an event log. Process mining is not limited to process discovery and also includes conformance checking. Conformance checking techniques are used for evaluating the quality of discovered process models and to diagnose deviations from some normative model (e.g., to check compliance). Existing conformance checking approaches typically focus on the control-flow, thus being unable to diagnose deviations concerning data. This paper proposes a technique to check the conformance of data-aware process models. We use so-called Petri nets with Data to model data variables, guards, and read/write actions. The data-aware conformance checking problem may be very time-consuming and sometimes even intractable when there are many transitions and data variables. Therefore, we propose a technique to decompose large data-aware conformance checking problems into smaller problems that can be solved more efficiently. We provide a general correctness result showing that decomposition does not influence the outcome of conformance checking. The approach is supported through ProM plug-ins and experimental results show significant performance improvements. Experiments have also been conducted with a real-life case study, thus showing that the approach is also relevant in real business settings. Keywords: Process Mining; Conformance Checking; Divide-and-Conquer Techniques; Multi-Perspective Process Modelling
- Published
- 2014
33. Referencing modes and ease of validation within fact-based modeling
- Author
-
Bollen, P.W.L., Meersman, R., Organisation,Strategy & Entrepreneurship, and RS: GSBE ERD
- Subjects
Engineering drawing ,Human–computer interaction ,Computer science ,Mode (statistics) ,Context (language use) ,Domain (software engineering) - Abstract
In this article we evaluate the different referencing modes in fact-based modeling and investigate to what extent the application of the fact-based modeling methodology in a specific domain context affects the choice of a referencing mode, and under what conditions one referencing mode is preferred over another.
- Published
- 2014
- Full Text
- View/download PDF
34. Two-level meta-controlled substitution grammars
- Author
-
Meersman, R. and Rozenberg, G.
- Published
- 1978
- Full Text
- View/download PDF
35. Configurable declare : designing customisable flexible models
- Author
-
Schunselaar, D.M.M., Maggi, F.M., Sidorova, N., Aalst, van der, W.M.P., Meersman, R., and Process Science
- Subjects
TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS - Abstract
Declarative languages are becoming more popular for modelling business processes with a high degree of variability. Unlike procedural languages, where the models define what is to be done, a declarative model specifies what behaviour is not allowed, using constraints on process events. In this paper, we study how to support configurability in such a declarative setting. We take Declare as an example of a declarative process modelling language and introduce Configurable Declare. Configurability is achieved by using configuration options for event hiding and constraint omission. We illustrate our approach using a case study, based on process models of ten Dutch municipalities. A Configurable Declare model is constructed supporting the variations within these municipalities.
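The two configuration options the abstract names, event hiding and constraint omission, can be sketched on a toy model. The encoding of a Declare model as an event set plus constraint tuples is an illustrative assumption, not Configurable Declare's actual syntax.

```python
# A toy Declare model: constraints are (template, source, target) tuples.
model = {
    "events": {"register", "assess", "decide", "archive"},
    "constraints": [
        ("response", "register", "assess"),   # after register, eventually assess
        ("response", "assess", "decide"),
        ("precedence", "decide", "archive"),  # archive only after decide
    ],
}

def configure(model: dict,
              hide: set[str] = frozenset(),
              omit: set[tuple] = frozenset()) -> dict:
    """Derive a variant: drop hidden events (and any constraint that
    mentions them) plus explicitly omitted constraints."""
    events = model["events"] - hide
    constraints = [c for c in model["constraints"]
                   if c not in omit and c[1] in events and c[2] in events]
    return {"events": events, "constraints": constraints}

# One municipality's configuration: no assessment step, no archiving rule.
variant = configure(model,
                    hide={"assess"},
                    omit={("precedence", "decide", "archive")})
```

Hiding `assess` silently removes every constraint referring to it, which is what makes the configured model a consistent restriction of the configurable one.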
- Published
- 2013
36. Looking into the future : using timed automata to provide a priori advice about timed declarative process models
- Author
-
Westergaard, M., Maggi, F.M., Meersman, R., and Process Science
- Abstract
Many processes are characterized by high variability, making traditional process modeling languages cumbersome or even impossible to use for their description. This is especially true in cooperative environments relying heavily on human knowledge. Declarative languages, like Declare, alleviate this issue by not describing what to do step by step but by defining a set of constraints between actions that must not be violated during the process execution. Furthermore, in modern cooperative business, time is of utmost importance. Therefore, declarative process models should be able to take this dimension into consideration. Timed Declare has already previously been introduced to monitor temporal constraints at runtime, but it has until now only been possible to provide an alert when a constraint has already been violated, without the possibility of foreseeing and avoiding such violations. Moreover, the existing definitions of Timed Declare do not support the static detection of time-wise inconsistencies. In this paper, we introduce an extended version of Timed Declare with a formal timed semantics for the entire language. The semantics degenerates to the untimed semantics in the expected way. We also introduce a translation to timed automata, which allows us to detect inconsistencies in models prior to execution and to detect early that a certain task is time sensitive. This means that either the task cannot be executed after a deadline (or before a latency), or that constraints are violated unless it is executed before (or after) a certain time. This makes it possible to use declarative process models to provide a priori guidance instead of just a posteriori detecting that an execution is invalid.
- Published
- 2013
37. The method of Garabedian: Some extensions
- Author
-
de Meersman, R.
- Published
- 1968
- Full Text
- View/download PDF
38. A Framework for Representation, Validation and Implementation of Database Application Semantics
- Author
-
van Keulen, Maurice, Skowronek, J., Apers, Peter M.G., Balsters, H., Blanken, Henk, de By, R.A., Flokstra, Jan, Meersman, R., Mark, L., and Databases (Former)
- Subjects
Tools and techniques ,Database ,DB-OODB: OBJECT-ORIENTED DATABASES ,Semantics (computer science) ,Computer science ,Programming language ,computer.software_genre ,Requirements/specifications ,Languages ,Semantic representation ,Logical design ,Representation (mathematics) ,EWI-7673 ,computer - Abstract
New application domains in data-processing environments pose new requirements on the methodologies, techniques and tools used to design them. The applications’ semantics should be fully represented at an increasingly high level, and the representation should be subject to rigorous validation and verification. We present a semantic representation framework (including the language, methods and tools) for design of data-processing applications. The new features of the framework include a small number of precisely defined domain-independent concepts, high-level possibilities for describing behavioural semantics (methods and constraints) and the validation and verification tools included in the framework. We present examples of the use of the framework, including the use of its tools.
- Published
- 1995
- Full Text
- View/download PDF
39. An Uncertain Data Integration System
- Author
-
Ayat, N., Afsarmanesh, H., Akbarinia, R., Valduriez, P., Meersman, R., Panetto, H., Dillon, T., Rinderle-Ma, S., Dadam, P., Zhou, X., Pearson, S., Ferscha, A., Bergamaschi, S., Cruz, I.F., Informatics Institute [Amsterdam], University of Amsterdam [Amsterdam] (UvA), Scientific Data Management (ZENITH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Federated Collaborative Networks (IVI, FNWI), and Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Inria Sophia Antipolis - Méditerranée (CRISAM)
- Subjects
Matching (statistics) ,[INFO.INFO-DB]Computer Science [cs]/Databases [cs.DB] ,Uncertain data ,Computer science ,Ontology-based data integration ,Probabilistic logic ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Schema matching ,Data model ,020204 information systems ,Schema (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,schema matching ,functional dependency ,Data mining ,Functional dependency ,uncertain data integration ,computer ,data integration ,Data integration - Abstract
Data integration systems offer uniform access to a set of autonomous and heterogeneous data sources. An important task in setting up a data integration system is to match the attributes of the source schemas. In this paper, we propose a data integration system which uses the knowledge implied within functional dependencies for matching the source schemas. We build our system on a probabilistic data model to capture the uncertainty arising during the matching process. Our performance validation confirms the importance of functional dependencies, and also of using a probabilistic data model, in improving the quality of schema matching. Our experimental results show significant performance gain compared to the baseline approaches. They also show that our system scales well.
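The use of functional dependencies as matching evidence can be sketched with a toy heuristic: two attributes from different schemas score higher when, beyond name similarity, they are determined by similarly named keys. The scoring weights and all function names below are assumptions for illustration, not the paper's probabilistic model.

```python
def name_sim(a: str, b: str) -> float:
    """Crude string similarity: Jaccard overlap of character trigrams."""
    grams = lambda s: {s[i:i + 3] for i in range(max(1, len(s) - 2))}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def fd_boost(attr_a: str, attr_b: str, fds_a, fds_b) -> float:
    """FD evidence: boost if both attributes are functionally determined
    by similarly named determinants in their respective schemas."""
    dets_a = [lhs for lhs, rhs in fds_a if rhs == attr_a]
    dets_b = [lhs for lhs, rhs in fds_b if rhs == attr_b]
    return max((name_sim(x, y) for x in dets_a for y in dets_b), default=0.0)

def match_score(attr_a, attr_b, fds_a, fds_b) -> float:
    return 0.7 * name_sim(attr_a, attr_b) + 0.3 * fd_boost(attr_a, attr_b,
                                                           fds_a, fds_b)

# Toy schemas: FDs as (determinant, dependent) pairs.
fds_s1 = [("isbn", "book_title"), ("isbn", "price")]
fds_s2 = [("isbn", "title"), ("isbn", "cost")]
```

Here `book_title` and `title` share both a name fragment and a common determinant (`isbn`), so the pair outranks alternatives such as `book_title`/`cost`.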
- Published
- 2012
40. History-aware, real-time risk detection in business processes
- Author
-
Conforti, R., Fortino, G., La Rosa, M., Hofstede, ter, A.H.M., Meersman, R., Tharam, D., Herrero, P., et al., and Information Systems IE&IS
- Abstract
This paper proposes a novel approach for identifying risks in executable business processes and detecting them at run-time. The approach considers risks in all phases of the business process management lifecycle, and is realized via a distributed, sensor-based architecture. At design-time, sensors are defined to specify risk conditions which when fulfilled, are a likely indicator of faults to occur. Both historical and current process execution data can be used to compose such conditions. At run-time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a process automation suite to prompt the results to the user who may take remedial actions. The proposed architecture has been implemented in the YAWL system and its performance has been evaluated in practice.
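The sensor/manager split described above can be sketched as a small observer pattern. The class names, the polling style, and the example risk condition (elapsed time beyond the 90th percentile of historical case durations) are illustrative assumptions, not the YAWL implementation.

```python
class Sensor:
    """Holds a risk condition over historical and current execution data."""
    def __init__(self, name, condition):
        self.name, self.condition = name, condition

    def sample(self, history, current) -> bool:
        return self.condition(history, current)

class SensorManager:
    """Collects notifications from independent sensors; in the real
    architecture it would prompt the monitoring UI for remedial action."""
    def __init__(self):
        self.alerts: list[str] = []

    def poll(self, sensors, history, current):
        for s in sensors:
            if s.sample(history, current):
                self.alerts.append(s.name)

# Risk condition combining historical and current data: the running case
# has already exceeded the 90th-percentile duration of completed cases.
overtime = Sensor(
    "overtime risk",
    lambda hist, cur: cur["elapsed"] > sorted(hist)[int(0.9 * len(hist))],
)

manager = SensorManager()
history = [10, 12, 15, 14, 11, 13, 12, 16, 10, 12]  # past case durations
manager.poll([overtime], history, {"elapsed": 20})
```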
- Published
- 2011
41. Fact-Based Service Modeling in a Service Oriented Architecture
- Author
-
Bollen, P.W.L., Meersman, R, Dillon, T, Herrero, P, Organisation,Strategy & Entrepreneurship, RS: GSBE, and RS: GSBE ERD
- Subjects
Service (business) ,Knowledge management ,Process management ,Computer science ,Order (business) ,business.industry ,Business process ,computer.internet_protocol ,Service-oriented architecture ,Differentiated service ,business ,computer ,Outsourcing - Abstract
Service-oriented computing (SOC) allows organizations to tailor their business processes, in such a way that efficiency and effectiveness goals will be achieved by outsourcing (parts of) business processes to external (web-based) service-providers. In order to find the computing service-providers that provide the organizations with the biggest benefits, it is paramount that the service-requesting organization (SRO) has a precise description of the service it wants to have delivered by the service delivering organization (SDO). In this paper we will illustrate how enterprises that play the SDO and SRO roles can be conceptually integrated by creating conceptual models that share the definitions of the business processes within the service oriented architecture (SOA) framework.
- Published
- 2011
42. Empowering Enterprise Data Governance with Business Semantics Glossary
- Author
-
Debruyne, C., De Leenheer, P.G.M., Meersman, R., Software and Sustainability (S2), Business Web and Media, Network Institute, and Software & Services
- Published
- 2011
43. Fragment-based version management for repositories of business process models
- Author
-
Ekanayake, C.C., La Rosa, M., Hofstede, ter, A.H.M., Fauvet, M.C., Meersman, R., Dillon, T., Herrero, P., et al., and Information Systems IE&IS
- Abstract
As organizations reach higher levels of Business Process Management maturity, they tend to accumulate large collections of process models. These repositories may contain thousands of activities and be managed by different stakeholders with varying skills and responsibilities. However, while being of great value, these repositories induce high management costs. Thus, it becomes essential to keep track of the various model versions as they may mutually overlap, supersede one another and evolve over time. We propose an innovative versioning model, and associated storage structure, specifically designed to maximize sharing across process models and process model versions, reduce conflicts in concurrent edits and automatically handle controlled change propagation. The focal point of this technique is to version single process model fragments, rather than entire process models. Indeed, empirical evidence shows that real-life process model repositories have numerous duplicate fragments. Experiments on two industrial datasets confirm the usefulness of our technique.
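The fragment-sharing idea can be sketched with content-addressed storage: each fragment is stored once under a hash of its content, and a model version is just a list of fragment hashes. The class and representation below are illustrative assumptions, not the paper's storage structure.

```python
import hashlib

def fid(fragment: str) -> str:
    """Content address of a fragment."""
    return hashlib.sha1(fragment.encode()).hexdigest()[:10]

class FragmentStore:
    def __init__(self):
        self.fragments: dict[str, str] = {}   # hash -> fragment body (stored once)
        self.versions: dict[tuple, list] = {} # (model, version) -> fragment hashes

    def commit(self, model: str, version: int, fragments: list[str]):
        ids = []
        for f in fragments:
            i = fid(f)
            self.fragments.setdefault(i, f)   # dedup: identical fragments shared
            ids.append(i)
        self.versions[(model, version)] = ids

store = FragmentStore()
store.commit("claims", 1, ["register claim", "assess claim", "pay out"])
store.commit("claims", 2, ["register claim", "assess claim", "reject or pay"])
```

Version 2 reuses two of version 1's three fragments, so the store holds four fragment bodies rather than six, which is the storage saving the fragment-level approach targets.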
- Published
- 2011
44. Business process scheduling with resource availability constraints
- Author
-
Xu, Jiajie, Liu, Chengfei, Zhao, X., Yongchareon, S., Meersman, R., Dillon, T., and Information Systems IE&IS
- Subjects
Knowledge management ,Operations research ,Computer science ,Genetic algorithm scheduling ,Event-driven process chain ,business.industry ,Business process ,Scheduling (production processes) ,Resource allocation ,Resource management ,business ,Scheduling (computing) - Abstract
Resources tend to follow certain availability patterns, due to the maintenance cycles, work shifts, etc. Such availability patterns heavily influence the efficiency and effectiveness of enterprise process scheduling. Most existing process scheduling and resource management approaches focus on process structure and resource utilisation, yet neglect the resource availability constraints. In this paper, we investigate how to plan the business process instances scheduling in accordance with resource availability patterns, so that enterprise resources can be rationally and sufficiently used. Three planning strategies are proposed to maximise the process instance throughput using different criteria.
- Published
- 2010
- Full Text
- View/download PDF
45. Business protocol adaptation for flexible chain management
- Author
-
Seguel Pérez, R.E., Eshuis, H., Grefen, P.W.P.J., Meersman, R., Dillon, T., Herrero, P., and Information Systems IE&IS
- Subjects
Service (business) ,Business requirements ,Process management ,Supply chain management ,Computer science ,business.industry ,Artifact-centric business process model ,Business architecture ,Business service provider ,business ,Protocol (object-oriented programming) ,Outsourcing - Abstract
Nowadays, organizations collaborate in business chains using dynamic service outsourcing to deliver complex products and services. To enable the flexible formation of business chains, organizations need to ensure that their business protocols are compatible. If the business protocols are incompatible, then the organizations cannot form a business chain. Protocol adaptors can resolve incompatibilities between business protocols during chain formation. By using the customer order decoupling point, we identify three different business chain structures. For each chain structure, we identify how adaptation can be used to support flexible chain formation.
- Published
- 2010
- Full Text
- View/download PDF
46. Efficient and accurate retrieval of business process models through indexing
- Author
-
Jin, T., Wang, J., Wu, N., La Rosa, M., Hofstede, ter, A.H.M., Meersman, R., Dillon, T., Herrero, P., and Information Systems IE&IS
- Subjects
Process modeling ,Database ,Artifact-centric business process model ,Process (engineering) ,Computer science ,business.industry ,Process mining ,Business process modeling ,computer.software_genre ,Data science ,Business process discovery ,Business process management ,Query expansion ,business ,computer - Abstract
Recent years have seen an increased uptake of business process management technology in various industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. As process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. Experiments are conducted to demonstrate that our proposal achieves a significant reduction in query evaluation time.
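The speedup from indexing can be sketched with an inverted index from task labels to the models containing them: a query first intersects the index postings to get a small candidate set, and only those candidates undergo full (expensive) matching. The class and labels below are illustrative assumptions, not the paper's index structure.

```python
from collections import defaultdict

class LabelIndex:
    def __init__(self):
        self.index: dict[str, set] = defaultdict(set)  # label -> model ids

    def add(self, model_id: str, labels: set[str]):
        for label in labels:
            self.index[label].add(model_id)

    def query(self, labels: set[str]) -> set:
        """Models containing ALL query labels: the candidate set that a
        full structural matcher would then examine."""
        sets = [self.index.get(l, set()) for l in labels]
        return set.intersection(*sets) if sets else set()

idx = LabelIndex()
idx.add("m1", {"receive order", "check stock", "ship"})
idx.add("m2", {"receive order", "reject order"})
idx.add("m3", {"check stock", "ship"})
```

On a large repository, most models are ruled out by the intersection alone, so evaluation time grows with the candidate set rather than the repository size.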
- Published
- 2010
47. A Fact-Based Meta Model for Standardization Documents
- Author
-
Bollen, P.W.L., Meersman, R., Dillon, T., Herrero, P., Organisation,Strategy & Entrepreneurship, and RS: GSBE ERD
- Subjects
Business Process Model and Notation ,Database ,Standardization ,business.industry ,Computer science ,Completeness (order theory) ,Standard (document) ,Software engineering ,business ,computer.software_genre ,computer ,Metamodeling - Abstract
Recently, the OMG has been working on developing a new standard for Business Process Model and Notation (BPMN). This standard development has resulted in documents that contain the latest approved version of a standard or a standard proposal that can be amended. Such a standard document also serves as a specification for a BPMN modeling tool. In this paper we show how a fact-based approach can improve the completeness and maintenance of such a specification.
- Published
- 2010
48. Real-Time Integration of Geo-data in ORM
- Author
-
Balsters, Herman, Klaver, Chris, Huitema, George B., Meersman, R, Dillon, T, and Herrero, P
- Subjects
InformationSystems_DATABASEMANAGEMENT - Abstract
Geographic information (geo-data, i.e., data with a spatial component) is being used for civil, political, and commercial applications. Modeling geo-data can be challenging due to its often very complex structure, hence placing high demands on the modeling language employed. Many geo-applications would greatly benefit from the possibility of integrating existing geo-databases. Data integration is a notoriously hard problem, and integrating geo-databases in practice often adds the extra requirement that the integration should result in a real-time system. This paper provides a case study and a design method for real-time integration of geo-databases based on the ORM modeling language. We will show that the use of ORM is superior to competing approaches, and that the so-called ORM federation procedure will yield correct design of integrated geo-databases.
- Published
- 2010
49. Configurable services in the cloud : supporting variability while enabling
- Author
-
Aalst, van der, W.M.P., Meersman, R., Dillon, T., Herrero, P., Information Systems IE&IS, and Process Science
- Abstract
The Software as a Service (SaaS) paradigm is particularly interesting for situations where many organizations need to support similar processes. For example, municipalities, courts, rental agencies, etc. support highly similar processes. However, despite these similarities, there is also the need to allow for local variations in a controlled manner. Therefore, cloud infrastructures should provide configurable services such that products and processes can be customized while sharing commonalities. Configurable and executable process models are essential to realize such infrastructures. This will finally transform reference models from "paper tigers" (reference modeling à la SAP, ARIS, etc.) into an "executable reality". Moreover, "configurable services in the cloud" enable cross-organizational process mining. This way, organizations can learn from each other and improve their processes.
- Published
- 2010
50. Process mining towards semantics
- Author
-
Alves De Medeiros, A.K., Aalst, van der, W.M.P., Dillon, T.S., Chang, E., Meersman, R., Sycara, K., Information Systems IE&IS, and Process Science
- Subjects
Business process discovery ,Process modeling ,Business process ,Computer science ,Business rule ,Event (computing) ,Process mining ,Semantics ,Semantic data model ,Data science - Abstract
Process mining techniques target the automatic discovery of information about process models in organizations. The discovery is based on the execution data registered in event logs. Current techniques support a variety of practical analyses, but they are somewhat limited because the labels in the log are not linked to any concepts. Thus, in this chapter we show how the analysis provided by current techniques can be improved by including semantic data in event logs. Our explanation is divided into two main parts. The first part illustrates the power of current process mining techniques by showing how to use the open source process mining tool ProM to answer concrete questions that managers typically have about business processes. The second part utilizes usage scenarios to motivate how process mining techniques could benefit from semantically annotated event logs and defines a concrete semantic log format for ProM. The ProM tool is available at www.processmining.org.
- Published
- 2009
- Full Text
- View/download PDF