19 results for "Zhao, Liping"
Search Results
2. Three essays on financial economics
- Author
-
Zhao, Liping, primary
3. Instant messaging-based networked service provisioning and access framework
- Author
-
Zhao, Liping, primary
4. Understanding blockchain applications from architectural and business process perspectives
- Author
-
Alzhrani, Fouzia and Zhao, Liping
- Subjects
Blockchain ,Software patterns ,Framework ,Process mining ,Business process modeling ,Architectural patterns - Abstract
Blockchain is a promising cross-industry technology. With the rapid evolution of the technology, academia and industry are exploring the applicability of blockchain in various domains, including healthcare, supply chain management, and the Internet of Things. This technology, with its characteristics of decentralization, anonymity, persistency, and auditability, delivers a new way to enforce trust among mutually distrusting business partners. It combines cryptography, peer-to-peer networking, data management, consensus protocols and incentive mechanisms to support optimal execution of transactions between involved parties. Blockchain applications are complex, heterogeneous, and require cooperation and interoperation with non-blockchain systems. Their complexity is further exacerbated by the lack of a clear understanding of their composition, as well as the stringent demands on functional and non-functional requirements. This thesis aims to address these shortfalls and sets out to gain an understanding of blockchain applications from architectural and business process perspectives. This understanding is elaborated through several relevant, yet independent, research contributions: a taxonomy, software patterns and pattern languages, and a process-aware framework design and implementation. These artifacts are supported by comprehensive datasets of industry-developed and academia-researched blockchain applications, as well as a set of event logs related to these applications. Several research methodologies were adopted to produce the contributions, including literature review, software decomposition, domain analysis, and automated business process discovery. The research was validated through a mixed-method approach, which proves that such understanding can better inform software architects and developers in their analysis, design and implementation of blockchain applications.
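As a rough illustration of the automated business process discovery mentioned above, the sketch below counts directly-follows relations in an event log, the usual starting point of discovery algorithms. It is a minimal Python sketch; the event log and activity names are invented for illustration and do not come from the thesis datasets.

from collections import Counter
from itertools import pairwise

# Made-up event log: one trace of activities per blockchain case.
event_log = [
    ["submit_tx", "validate", "reach_consensus", "append_block"],
    ["submit_tx", "validate", "reject_tx"],
    ["submit_tx", "validate", "reach_consensus", "append_block"],
]

# A directly-follows graph counts how often activity a is immediately
# followed by activity b across all traces; discovery algorithms build
# process models from such relations.
dfg = Counter(pair for trace in event_log for pair in pairwise(trace))
for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: {n}")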
- Published
- 2023
5. A fully online approach for anomaly detection and change-point detection in streaming data using LSTM
- Author
-
Khanam, Memoona, Zhao, Liping, and Shapiro, Jonathan
- Subjects
Backpropagation through time ,Multivariate Normal Distribution ,Long short-term memory network ,Online anomaly detection ,Online change-point detection - Abstract
In this thesis, we propose a novel online anomaly detection and change-point detection algorithm based on contemporary recurrent neural networks, such as Long Short-Term Memory (LSTM), for time series anomaly detection in a changing environment, where the change in normal behaviour may be transitory (a point anomaly) or permanent (a change-point). The proposed model is trained incrementally as new streaming data become available and can adapt to changes in the distribution of the underlying pattern. An LSTM makes single- or multi-step predictions of the time series, from 1 up to 10 steps ahead, and the prediction errors are used both to detect anomalies and change-points and to update the model. Fundamentally, the algorithm maintains an online model of the prediction error, so that a large prediction error indicates anomalous behaviour in the changing environment. The prediction errors also update the online model in such a way that transitory anomalies do not lead to a radical change in the model, whereas persistently high prediction errors caused by permanent changes (change-points) lead to substantial model updates. The model thus adapts automatically and swiftly to changing statistics in the input data distribution. Because the technique strives for fully online operation, the model assumes no labels in the data stream except during evaluation. Furthermore, it does not rely on any user-defined parameters (such as a threshold); these are defined and updated automatically by the model itself. We validate the efficiency of this model-based, parametric, unsupervised online approach through experiments on three real-world and four synthetic benchmark datasets, publicly accessible and proprietary, taken and derived from the Yahoo Labs Benchmark Dataset (Yahoo Webscope) and the Numenta Anomaly Benchmark (NAB), which contain labelled anomalies. We compare the results, in terms of Area Under the Curve (AUC), with state-of-the-art algorithms from the literature. We observe that the proposed model performs reliably better with multi-step predictions than with single-step predictions on all datasets in terms of AUC. Models trained for multi-step predictions, such as 5 or 10 steps ahead (where a step is the number of future predictions the model makes), are more competent at detecting temporal changes in the target output distribution and are consequently able to detect change-points and adapt to changes much earlier than a model trained to predict only one step ahead. Unlike other methods, our method not only detects anomalies as quickly as possible but also prevents them from drastically changing the learned distribution. In the case of short-term changes (point anomalies), the results suggest a trade-off, dependent on prediction length, between how early an anomaly is detected and how long false positives last. We conclude that the proposed online anomaly and change-point detection model consistently outperforms the alternatives and gives its full advantage when used for multi-step time series predictions.
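A minimal sketch of the error-modelling idea, in Python with numpy: a naive last-value forecaster stands in for the LSTM, and slowly updated error statistics keep transitory anomalies from radically changing the model while permanent shifts accumulate into substantial updates. The multiplier k and rate alpha are illustrative assumptions; the thesis defines and updates its detection criterion automatically.

import numpy as np

def online_detect(stream, k=3.0, alpha=0.05):
    """Flag anomalies using an online model of the prediction error.

    A naive last-value forecaster stands in for the thesis's LSTM;
    k and alpha are illustrative assumptions, not thesis parameters.
    """
    mean_e, var_e = 0.0, 1.0      # running model of the prediction error
    prediction = stream[0]
    flags = []
    for x in stream[1:]:
        err = abs(x - prediction)
        flags.append(err > mean_e + k * np.sqrt(var_e))
        # slow (small-alpha) update: a one-off spike barely moves the
        # statistics, but a permanent shift keeps pushing them until
        # the model has adapted to the new regime
        mean_e = (1 - alpha) * mean_e + alpha * err
        var_e = (1 - alpha) * var_e + alpha * (err - mean_e) ** 2
        prediction = x            # one-step-ahead forecast
    return flags

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(6, 1, 200)])
print(sum(online_detect(data)), "points flagged")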
- Published
- 2023
6. Using machine learning algorithms for classifying non-functional requirements : research and evaluation
- Author
-
Binkhonain, Manal, Clinch, Sarah, and Zhao, Liping
- Subjects
Requirements classification ,text classification ,machine learning ,requirements engineering - Abstract
Requirements classification, the process of assigning requirements to classes, is essential to requirements engineering, as it serves to define and organize the requirements for application systems, to determine the boundaries of the systems, to establish the relationships among the requirements, and to ensure the correct kinds of functionality are implemented in the systems. As most requirements are written in natural language, the manual classification of textual requirements can be time-consuming and error-prone. Aiming to reduce the burden on the human analyst, the machine learning (ML) approach has been used since the early 2000s for automatic requirements classification. The ML approach faces three problems in non-functional requirements (NFR) classification: imbalanced classes, short text, and the high dimensionality of the feature space. Although these problems are widely addressed in various classification tasks, they are less frequently considered in requirements classification. In this thesis, we present two ML methods for automatically classifying NFRs. The main novelty of these methods lies in applying techniques that address the classification problems mentioned earlier. The first method integrates three techniques, namely dataset decomposition, semantic role-based feature selection, and feature extension, to address the three problems. The second method addresses short-text classification by adding the most similar requirements (i.e., the requirement extension technique). Both methods were evaluated on a publicly available NFR dataset. The results of each method are compared with related methods, baseline methods, and state-of-the-art solutions to the problems. The results demonstrate the usefulness of addressing these problems in NFR classification and the effectiveness of the proposed methods, suggesting that these solutions could improve different requirements classification tasks. To assess the generalization of the results, we present a case study on the use of ML methods in sub-class NFR classification. In particular, we reapply the proposed methods to classify usability requirements according to usability aspects. This study includes the identification of the most common aspects of usability by systematically reviewing existing usability models, as well as the building of usability requirements datasets. The results of applying ML methods to classify usability requirements are similar to those for NFRs, confirming the usefulness of addressing these problems in requirements classification.
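A minimal sketch of the general recipe, assuming scikit-learn: TF-IDF features over short requirement texts plus a classifier weighted against class imbalance. The thesis's own techniques (dataset decomposition, semantic role-based feature selection, feature extension, requirement extension) are not reproduced here, and the example requirements are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requirements = [
    "The system shall respond to any query within 2 seconds.",
    "Only authenticated users may access patient records.",
    "The UI shall be usable by first-time visitors without training.",
    "All passwords must be stored using salted hashes.",
]
labels = ["performance", "security", "usability", "security"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),          # short-text features
    LogisticRegression(class_weight="balanced"),  # counter class imbalance
)
clf.fit(requirements, labels)
print(clf.predict(["Login requires two-factor authentication."]))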
- Published
- 2021
7. An automated system for identification of useful user reviews for mobile application development
- Author
-
Tavakoli, Mohammadali, Zhao, Liping, and Nenadic, Goran
- Subjects
User feedback ,App reviews ,Requirements engineering ,NLP ,Application development ,App store mining ,Neural Networks - Abstract
In recent years, mobile app reviews have become known as a rich source of user feedback, which is of great value for software evolution. However, the volume of such reviews is huge, particularly for popular applications and large companies offering several applications. To address this issue, several automatic approaches have recently been proposed for identifying useful reviews. The criteria these approaches apply for measuring review usefulness originate from the few existing exploratory studies, wherein the usefulness of a review is interpreted as the inclusion of requirements engineering related topics. Such interpretations of usefulness, however, are based on the authors' understanding of usefulness rather than on developers' requirements. Ignoring the developers' viewpoint, the authors defined usefulness metrics based on their own observations and developed extraction approaches accordingly. Such approaches can therefore hardly be expected to produce results of interest to developers who deal with thousands of reviews daily. To bridge this gap, this study perused related work across several domains analysing human-generated feedback, such as reviews, tweets, requirement notes, bug reports, and application testing reports, to define a set of factors for accurately measuring the usefulness of user reviews. The usefulness factors were then validated by experienced mobile app developers in a focus group discussion session. Next, the task of extracting each of the approved factors was automated by applying deep learning and Natural Language Processing (NLP) techniques. Finally, the models designed for extracting each factor were integrated into a final system for automatically extracting useful reviews. Tested on different review datasets, the novel system achieved high accuracy (Aspects: 87%, Feature Requests: 72%, Issues: 67%, User Actions: 73%, and System Actions: 81%) and outperformed state-of-the-art extraction techniques. Moreover, unlike the state of the art, the proposed system is completely aligned with the developers' viewpoint, as it emphasises the developer-approved factors for measuring usefulness.
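An illustrative sketch of extracting one usefulness factor (feature requests) with simple lexical cues; the thesis trains deep learning and NLP models per factor, so the patterns below are assumptions for demonstration only.

import re

# Toy stand-in for one extraction model: flag reviews that look like
# feature requests using lexical cues. These cues are illustrative
# assumptions, not the thesis's learned models.
FEATURE_REQUEST_CUES = re.compile(
    r"\b(please add|would be (great|nice)|wish (it|you)|"
    r"should (have|support)|add (a|an|the|support))\b",
    re.IGNORECASE,
)

def looks_like_feature_request(review: str) -> bool:
    return bool(FEATURE_REQUEST_CUES.search(review))

reviews = [
    "Please add a dark mode, my eyes hurt at night.",
    "Crashes every time I open the camera.",
    "It would be great if the app supported offline maps.",
]
for r in reviews:
    print(looks_like_feature_request(r), "-", r)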
- Published
- 2021
8. A deep learning based approach to sketch recognition and model transformation for requirements elicitation and modelling
- Author
-
Olatunji, Oluwatoni, Zhao, Liping, and Lau, Kung-Kiu
- Subjects
Sketch Recognition ,Convolutional Neural Network ,Sketch ,Requirements Engineering ,Requirements Elicitation ,Model Transformation ,Requirements Modelling ,Domain Specific Language - Abstract
Requirements Engineering (RE) is the process of discovering and defining user requirements for developing software systems. The early phase of this process is concerned with eliciting and analysing requirements. Modelling, the activity of constructing abstract descriptions that are amenable to communication between different stakeholders, plays a critical role in requirements elicitation and analysis. However, current modelling tools are based on formal notations, such as UML diagrams and i* diagrams, and do not support drawing initial requirements models as hand-drawn diagrams, or as a mix of hand-drawn and computer-drawn diagrams, and subsequently transforming the drawn models into target software models. The research presented in this thesis aims to address this problem. It pursues two related objectives: 1) to develop a sketch tool, iSketch, that enables users to draw a use case diagram either by hand or as a mix of hand-drawn and computer-drawn elements; and 2) to support the transformation of the drawn use case diagram into initial software models represented by a UML class diagram and a UML sequence diagram. Central to these objectives is the development of novel sketch recognition and model transformation techniques for iSketch. To support sketch recognition, we have developed a deep learning technique that uses colour inversion to classify iSketch models and improve their recognition rate. To support model transformation, we have developed a semantic modelling approach that first translates iSketch models into intermediate agent-oriented models and finally into initial software models. iSketch was evaluated in two ways. First, iSketch was validated through two experiments measuring its performance in sketch recognition and model transformation, using stroke labelling and F-score metrics, respectively. In sketch recognition, iSketch achieved a recognition accuracy of 89.91% without colour inversion and 97.29% with it when tested on the iSketch dataset. In model transformation, iSketch achieved F-scores of 91.22% and 60.88% in generating UML sequence and class diagrams, respectively, from iSketch models. Second, iSketch was compared with 15 related approaches. The result showed that only iSketch supports the automatic generation of initial software models from hand-drawn requirements models.
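A minimal numpy sketch of the colour-inversion step; the surrounding CNN classification pipeline is assumed and not shown.

import numpy as np

def invert_colours(sketch: np.ndarray) -> np.ndarray:
    """Invert an 8-bit grayscale sketch image.

    Hand-drawn sketches are typically dark strokes on light paper;
    inversion yields bright strokes on black, the preprocessing the
    thesis reports improving recognition accuracy on the iSketch
    dataset (89.91% without inversion vs 97.29% with it).
    """
    return 255 - sketch

# toy 4x4 "sketch": 0 = black stroke, 255 = white paper
page = np.full((4, 4), 255, dtype=np.uint8)
page[1:3, 1:3] = 0
print(invert_colours(page))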
- Published
- 2021
9. Using semantic frames for measuring and identifying semantic relationships in software descriptions
- Author
-
Alhoshan, Waad, Zhao, Liping, and Batista-Navarro, Riza Theresa
- Subjects
005.1 ,Requirements Engineering ,Software Requirements ,Natural Requirements ,Requirement Document ,Software Description ,RE ,Semantic Relatedness ,NLP ,Natural Language Processing ,FrameNet ,Semantic Relationships - Abstract
As most software requirements are written in natural language, they are unstructured and do not adhere to any formalism. Therefore, automatically processing these requirements in the context of Requirements Engineering (RE) is often difficult, complex and opaque. The problems fall under the remit of linguistic issues, such as ambiguity and incompleteness. Techniques and resources from Natural Language Processing (NLP) have been used to explore natural language issues in unstructured requirement documents. Research in this hybrid area of RE has covered various tasks, including analysing, modelling and organising requirements, which are generally referred to as NLP for RE (or simply NLP4RE) research tasks. An essential linguistic process common to most NLP4RE tasks is identifying relationships between requirement statements, i.e., detecting semantic relatedness and similarity within a requirement document as a collection of software descriptions. By detecting such complex and (mostly) hidden relationships in the natural description of requirements, we will end up with more accurate and robust NLP4RE tools that can handle the lack of formalism in unstructured requirement documents. For example, such tools could enable traceability across an arbitrary set of natural language documents by linking shared semantic relationships, i.e., tracing requirements that concern specific concepts, such as sending/receiving operations or verifying user credentials for security purposes. This PhD thesis explores the potential of, and adopts, the semantic frames embodied in the FrameNet lexicon to provide unique insights and novel approaches (accompanied by several methods implemented as systems) for measuring and identifying semantic relationships in software descriptions expressed in unstructured natural language. We follow a research methodology consisting of collecting evidence of FrameNet's feasibility in RE, experimenting with various FrameNet-based solutions and critically appraising these solutions using real-world requirement documents. The first approach, the knowledge-based approach, is implemented based on the knowledge available in the FrameNet lexicon, through which we experiment with the various semantic similarity metrics used with different ontologies and lexica in FrameNet. The second approach, the corpus-supported approach, adapts FrameNet-tagged corpora, one of which is the result of an earlier research method studying FrameNet's coverage of requirement documents. The corpus-supported approach utilises corpus features, such as frame frequencies and co-occurrences, to measure the relatedness between frames in the RE use context. The third and final approach, the embedding-based approach, is based on word embeddings trained for the RE domain; thus, we propose new resources, i.e., embedding-based representations of the semantic frames in FrameNet. We obtain encouraging results from the corpus-based analysis conducted to study FrameNet's appropriateness for labelling software descriptions, through which this research creates the first RE corpus, consisting of 5,348 requirement statements, that is fully annotated with FrameNet frames. Afterwards, the proposed approaches to measuring semantic frame relatedness are evaluated on their designated task: identifying related semantic frames from FrameNet while considering the RE context. The intrinsic evaluation is compared with a human-judgment dataset of frame-to-frame relationships. As a result, the embedding-based approach achieves a more than satisfactory overall performance rate in measuring and identifying semantic relationships between FrameNet frames from an RE perspective. For the extrinsic evaluation, we use the embedding-based approach in a requirement measurement technique to identify semantic relationships between natural language requirement statements. A satisfactory performance rate is obtained compared with lexically-founded baseline systems and the human-judgment dataset. The encouraging results of the embedding-based approach prove the adequacy of using encapsulated contextual information (represented by semantic frames) to trace requirements' relatedness.
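A minimal sketch of measuring frame relatedness with embedding cosine similarity; the frame names follow FrameNet-style vocabulary, but the vectors are invented stand-ins for the thesis's RE-domain embeddings.

import numpy as np

# Toy frame embeddings: in the thesis these are trained for the RE
# domain; the vectors here are made-up assumptions for illustration.
frame_vectors = {
    "Sending":      np.array([0.9, 0.1, 0.3]),
    "Receiving":    np.array([0.8, 0.2, 0.4]),
    "Verification": np.array([0.1, 0.9, 0.2]),
}

def relatedness(frame_a: str, frame_b: str) -> float:
    """Cosine similarity between two frame embeddings."""
    a, b = frame_vectors[frame_a], frame_vectors[frame_b]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"Sending~Receiving:    {relatedness('Sending', 'Receiving'):.3f}")
print(f"Sending~Verification: {relatedness('Sending', 'Verification'):.3f}")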
- Published
- 2020
10. Software support for quantitative near-infrared analysis and benchmarking of chemometric methods : a case study on single kernel samples
- Author
-
Hu, Shupeng, Zeng, Xiaojun, and Zhao, Liping
- Subjects
005.1 ,Variable Selection ,Multivariate Calibration ,Benchmarking of Chemometric Methods ,IDEF0 Functional Modelling ,Quantitative Analysis, Pre-processing ,Single Kernel Near-Infrared Spectroscopy (SKNIRS) ,Chemometrics ,Near-Infrared Spectroscopy (NIRS) ,Software System Development - Abstract
During the past decades, Near-Infrared Spectroscopy (NIRS) has been widely adopted as a non-destructive analytical tool in various fields. In agriculture and chemometrics, NIRS analysis at the single kernel level can improve not only the uniformity and purity of samples but also the quality and economic value of a seed batch. However, many limitations and challenges exist in single kernel Near-Infrared Spectroscopy (SKNIRS) analysis and its applications. The first contribution of this PhD thesis is an integrated software system to support data collection, processing and analysis for single kernel samples. IDEF0 Functional Modelling was used to guide the development and implementation of the integrated software system, and two real-world applications supported by the system are reported as its validation. Another contribution of this thesis is a benchmark of chemometric methods for comparative SKNIRS analysis based on the proposed stepwise process. Sixteen methods, comprising two dataset partition methods, three pre-processing methods, eight variable selection methods and three calibration methods, were assessed and compared on two statistics: root mean squared error of prediction (RMSEP) and coefficient of determination (R²). Conclusions are discussed in detail based on the results of the benchmarking analysis, which is appropriately general and may assist the choice of chemometric methods for SKNIRS.
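The two benchmarking statistics can be stated directly; below is a minimal numpy sketch of RMSEP and R², with invented kernel reference and predicted values.

import numpy as np

def rmsep(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error of prediction."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# made-up reference vs predicted protein content for a few kernels
y = np.array([11.2, 12.8, 10.5, 13.1])
y_hat = np.array([11.0, 12.5, 10.9, 13.4])
print(f"RMSEP = {rmsep(y, y_hat):.3f}, R2 = {r2(y, y_hat):.3f}")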
- Published
- 2020
11. An algebraic service composition model for the construction of large-scale IoT systems
- Author
-
Arellanes Molina, Damian, Zhao, Liping, and Lau, Kung-Kiu
- Subjects
004.67 ,Dataflows ,Functional Scalability ,Decentralised Data Flows ,Workflow Variability ,Choreography ,Workflows ,Service Composition ,DX-MAN ,Large-Scale Systems ,Internet of Things (IoT) ,Orchestration - Abstract
The Internet of Things (IoT) is an emerging paradigm that envisions the interconnection of (physical and virtual) objects through innovative distributed services. With the advancement of hardware technologies, the number of IoT services is rapidly growing due to the increasing number of connected things. Currently, there are about 19 billion connected things, and it is predicted that this number will grow exponentially in the coming years. The scale of IoT systems will hence surpass human expectations as such systems will require the composition of billions of services into complex behaviours. Thus, scalability in terms of the size of IoT systems becomes a significant challenge. Existing service composition mechanisms (i.e., orchestration, choreography and dataflows) were primarily designed for the integration of enterprise services, not for the physical world. For that reason, they do not provide the requisite semantics and hence properties for tackling the scalability challenge that future IoT systems pose. In this thesis, we identify crucial scalability requirements for IoT systems, and propose an algebraic service composition model for the construction of large-scale IoT systems. The resulting model, DX-MAN, has been validated with a software platform and evaluated against the scalability requirements. A comparison with the related work shows that DX-MAN advances the state of the art on IoT service composition and it is, therefore, promising for the construction of future large-scale IoT software systems.
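A toy sketch of the algebraic idea, loosely inspired by DX-MAN: a composition operator takes services and returns another service, so compositions nest hierarchically. The sequencer operator and the example services are hypothetical; DX-MAN's actual connectors and semantics are not reproduced here.

from typing import Callable

Service = Callable[[dict], dict]

def sequencer(*services: Service) -> Service:
    """Exogenous sequential composition: the result is itself a
    Service, so compositions nest algebraically (a hypothetical
    operator sketched for illustration)."""
    def composite(data: dict) -> dict:
        for s in services:
            data = s(data)
        return data
    return composite

def read_sensor(d): return {**d, "temp": 21.5}
def convert(d):     return {**d, "temp_f": d["temp"] * 9 / 5 + 32}
def publish(d):     print("publish:", d); return d

pipeline = sequencer(read_sensor, convert, publish)  # still a Service
system = sequencer(pipeline)                         # composes again
system({})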
- Published
- 2020
12. Feature-oriented component-based development of whole software product families using enumerative variability
- Author
-
Qian, Chen, Zhao, Liping, and Lau, Kung-Kiu
- Subjects
005.1 ,product family engineering ,enumerative variability ,family-based testing - Abstract
A software product family is a cluster of related software systems that are used for similar purposes. To construct a software product family, product line engineering (PLE) implements the domain artefacts piece by piece and builds an 'assembly line' capable of assembling the artefacts for a product based on its configuration. As a result, there is no executable product to test before the final generation; a product line of non-trivial size is therefore impossible to test exhaustively, owing to the combinatorial explosion of variants. Our approach offers a new possibility: constructing a product family via a family architecture that captures and realises all the commonalities and variabilities in a feature-oriented, component-based manner. As a proof of concept, we implement our approach in a web-based tool, which can visualise the process of family construction and automatically generate any number of family members afterwards. Moreover, in this thesis we present the major advantage our approach brings: support for family-based testing. Finally, we evaluate our work by comparison with existing PLE approaches to show further potential advantages of our approach, such as scalability, maintainability and evolvability.
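A minimal sketch of enumerative variability: every member of a small, hypothetical product family is generated from a feature model by enumerating configurations. The features shown are invented, and the sketch omits the component assembly that the thesis's family architecture performs.

from itertools import product

# Hypothetical feature model for a tiny product family: each product
# is one configuration; enumerating them yields every family member.
features = {
    "storage": ["local", "cloud"],
    "auth":    ["none", "password", "2fa"],
    "export":  [False, True],
}

def enumerate_family(model: dict) -> list:
    keys = list(model)
    return [dict(zip(keys, combo)) for combo in product(*model.values())]

family = enumerate_family(features)
print(len(family), "family members, e.g.", family[0])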
- Published
- 2019
13. Evolution of a heterogeneous hybrid extreme learning machine
- Author
-
Christou, Vasileios, Zhao, Liping, and Brown, Gavin
- Subjects
004 ,Hybrid extreme learning machine ,Genetic algorithm ,Regression problem ,Classification problem ,Artificial neural network ,Custom neuron - Abstract
Hybrid optimization algorithms have gained popularity as it has become apparent that there cannot be a universal optimization strategy which is globally more beneficial than any other. Despite their popularity, hybridization frameworks require more detailed categorization regarding the nature of the problem domain, the constituent algorithms, the coupling schema and the intended area of application. This thesis proposes a hybrid algorithm named the heterogeneous hybrid extreme learning machine (He-HyELM) for finding the optimal multi-layer perceptron (MLP) with one hidden layer to solve a specific problem. This is achieved by combining the extreme learning machine (ELM) training algorithm with an evolutionary computing (EC) algorithm. The research is complemented by a series of preliminary experiments, conducted prior to hybridization, that explore in depth the characteristics of the ELM algorithm. He-HyELM uses a pool of custom-created neurons which are embedded in a series of ELM-trained MLPs. A genetic algorithm (GA) evolves these homogeneous networks into heterogeneous networks according to a fitness criterion. The GA utilizes a novel crossover operator that ranks each hidden-layer node in order to guide the evolution process. Having analysed the proposed He-HyELM algorithm in Chapter 5, an enhanced version of the algorithm is presented in Chapter 6. This enhanced version makes the mutation operator self-adaptive, with the aim of reducing the number of parameters that need tuning. Both the He-HyELM and SA-He-HyELM approaches are tested on three regression and three classification real-world datasets. These experiments showed that both versions improved generalization compared with the best homogeneous network found during the ELM empirical study in Chapter 3. Finally, Chapter 7 summarizes the key findings and contributions of this work.
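A minimal numpy sketch of the base ELM step that He-HyELM builds on: hidden weights are random and fixed, and only the output weights are solved by least squares. The GA evolution of heterogeneous custom neurons is not shown.

import numpy as np

def train_elm(X, T, n_hidden=50, seed=0):
    """Extreme learning machine: hidden weights are random and fixed;
    only the output weights are solved, by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: learn y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X).ravel()
W, b, beta = train_elm(X, T)
print("train MSE:", np.mean((predict_elm(X, W, b, beta) - T) ** 2))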
- Published
- 2019
14. Structural analysis of Arabic tweets
- Author
-
Albogamy, Fahad and Zhao, Liping
- Subjects
006.3 ,Parsing ,Treebank ,POS tagging ,Arabic tweets ,NLP - Abstract
This thesis explores the task of analysing the linguistic structure of Arabic tweets. Arabic tweets raise many challenges that make Natural Language Processing (NLP) tasks difficult. We are faced with the same linguistic issues as any ordinary language, as well as more genre-specific problems. Tweets are difficult to manipulate because they do not always maintain formal grammar and correct spelling, and abbreviations are often used to overcome length restrictions. Arabic tweets also exhibit linguistic phenomena such as the use of different dialects, Romanised Arabic and borrowing of foreign words. All these characteristics of the microblogging genre make NLP tasks on Twitter very different from their counterparts on more formal texts. Within most NLP systems there are several early stages, such as tagging, stemming and parsing, that may need to be redesigned to take the characteristics of tweets into account in order to extract their important linguistic features. To fulfil this need, three of the most fundamental parts of the linguistic pipeline, namely POS tagging, stemming and parsing, have been revisited for Arabic tweets. To the best of our knowledge, this is the first attempt to carry out this task for Arabic tweets. We investigate the challenges of processing Arabic tweets, studying a number of standard Arabic processing tools and highlighting their limitations when manipulating Arabic tweets. We make three state-of-the-art POS taggers for Modern Standard Arabic (MSA) robust to noise when applied to Arabic tweets. We develop the first fast and robust POS tagger for Arabic tweets and create the first POS-tagged corpus of Arabic tweets. We also develop two approaches to stemming Arabic tweet words: a heavy stemmer and a light stemmer. We find that the light stemmer is the most suitable approach for stemming Arabic tweet words because it does not use dictionaries, is fast, and yields greater accuracy than the heavy stemmer and MSA stemmers. We are able to automatically create the first dependency treebank from unlabelled tweets using two approaches: a rule-based parser alone, and a rule-based parser combined with a data-driven parser in a bootstrapping technique. We then train a data-driven base parsing model on the treebank to parse Arabic tweets. The findings are encouraging. We are able to improve POS tagging accuracy on Arabic tweets from 49% to 74.0%. Experimental results show that the light stemmer achieves 77.9% accuracy, outperforming three well-known stemmers for Arabic. Our parser reaches 71.0% accuracy, which is better than the performance of French parsing for social media data and not far behind English parsing for tweets.
- Published
- 2018
15. Reverse engineering encapsulated components from legacy code
- Author
-
Arshad, Rehman, Zhao, Liping, and Lau, Kung-Kiu
- Subjects
004 ,Reverse Engineering ,Legacy Code ,Component based development - Abstract
Component-based development is an approach that revolves around the construction of systems from pre-built modular units (components). If legacy code can be reverse engineered to extract components, the extracted components can provide architectural reusability across multiple systems in the same domain. Current component-directed reverse engineering approaches are based on component models that belong to architecture description languages (ADLs). ADL-based components cannot be reused without configurational changes at the code level and without binding every required and provided service. Moreover, these component models support neither code-independent composition after the extraction of components nor the re-deposition of a composed configuration of components for future reuse. This thesis presents a reverse engineering approach that extracts components and addresses the limitations of current approaches, together with a tool called RX-MAN. Unlike ADL-based approaches, the presented approach is based on an encapsulated component model called X-MAN. X-MAN components are encapsulated because computation cannot go outside a component; they cannot interact directly but only exogenously (composition is defined outside the components). Our approach offers code-independent composition after extracting components and, unlike ADLs, does not require binding all the services. The evaluation of our approach shows that it can facilitate the reusability of legacy code by providing code-independent composition and the re-deposition of composed configurations of components for further reuse and composition.
- Published
- 2018
16. TRAM : transforming textual requirements to support the earliest stage of model driven development
- Author
-
Letsholo, Keletso and Zhao, Liping
- Subjects
005.1 ,Semantic Object Models ,Analysis Models ,Natural Language Processing ,Requirements Transformation ,Model Transformation ,Requirements Traceability - Abstract
Tool support for automatically constructing analysis models from the natural language specification of requirements (NLR) is critical to Model-Driven Development (MDD), as it can bring forward the use of precise formal languages from the coding phase to the specification phase in the MDD life-cycle. However, there has been a lack of tools for automatically constructing initial software models (i.e., analysis models) from NLRs. The MDD process assumes that an analyst creates the initial software models manually. Consequently, the traceability links between the requirements specification and the software created according to this specification are not explicitly represented. Unfortunately, current MDD technologies have failed to recognise this intrinsic relationship between requirements traceability, requirements transformation and model transformation. The aim of this research is to develop a novel MDD approach for automatically constructing analysis models from unstructured NL requirements, supporting both the earliest phase of MDD and requirements traceability. The proposed approach makes requirements traceability an integral part of model construction and transformation, a feature not adequately supported by existing NL-based transformation approaches. In addition, a human-enabled model validation approach is proposed and used to check whether the knowledge possessed by domain experts is correctly and comprehensively represented in the models constructed by the proposed approach. The results obtained are encouraging and demonstrate that the proposed approach can assist the earliest stage of MDD.
- Published
- 2015
17. Automatic construction of conceptual models to support early stages of software development : a semantic object model approach
- Author
-
Chioasca, Erol-Valeriu and Zhao, Liping
- Subjects
005.1 ,semantic object models ,conceptual models - Abstract
The earliest stage of software development almost always involves converting requirements descriptions written in natural language (NLRs) into initial conceptual models represented in some formal notation. This stage is time-consuming and demanding, as initial models are often constructed manually, requiring human modellers to have appropriate modelling knowledge and skills. It is also critical, as errors made in initial models are costly to correct if left undetected until later stages. Consequently, automated tool support is desirable at this stage. Many approaches support the modelling process in the early stages of software development. The majority employ linguistic-driven analysis to extract essential information from input NLRs in order to create different types of conceptual models. However, the main difficulty to overcome is the ambiguous and incomplete nature of NLRs. Semantic-driven approaches have the potential to address these difficulties; however, current state-of-the-art methods have not been designed to address the incomplete nature of NLRs. This thesis presents a semantic-driven automatic model construction approach which addresses the limitations of current semantic-driven NLR transformation approaches. Central to this approach is a set of primitive conceptual patterns called Semantic Object Models (SOMs), which superimpose a layer of semantics and structure on top of NLRs. These patterns serve as intermediate models to bridge the gap between NLRs and their initial conceptual models. The proposed approach first translates a given NLR into a set of individual SOM instances (SOMi) and then composes them into a knowledge representation network called a Semantic Object Network (SON). The proposed approach is embodied in a software tool called TRAM. The validation results show that the proposed semantic-driven approach aids users in creating improved conceptual models. Moreover, practical evaluation of TRAM indicates that the proposed approach performs better than its peers and has the potential for use in real-world software development.
- Published
- 2015
18. QRMF : a multi-perspective framework for quality requirements modelling
- Author
-
Saeedi, Kawther Abdulelah, Zhao, Liping, and Sampaio, Pedro
- Subjects
005.1 ,Quality Requirement Modelling ,Requirement Modelling ,Multi-Perspective Requirement Modelling ,Requirement Modelling Views ,Requirement Modelling Process ,Requirement Meta-Models ,Quality Requirement description - Abstract
In recent years, a considerable amount of research has been conducted on modelling non-functional requirements (NFRs), or Quality Requirements (QRs). However, in comparison with functional requirements (FR) modelling, QR models are still immature and have not been widely adopted. The fundamental reason for this shortfall, outlined in this thesis, is that existing QR modelling approaches have not adequately considered the challenging nature of QRs. In this thesis, this limitation is addressed by integrating QR modelling with FR modelling in a multi-perspective modelling framework. This framework, called QRMF (Quality Requirements Modelling Framework), offers a process-oriented approach to modelling QRs from different views and at different phases of requirements engineering. These models are brought together in a descriptive representation schema, which provides a logical structure to guide the construction of requirement models comprehensively and consistently. The research presented in the thesis introduces a generic meta-meta model for QRMF to aid understanding of the abstract concepts and to further guide the modelling process; it offers a reference blueprint for developing a modelling tool applicable to the framework. QRMF is supported by a modelling process, which guides requirements engineers in capturing a set of complete, traceable and comprehensible QR models for a software system. The thesis presents a case study that evaluates the practicality and applicability of QRMF. Finally, the framework is evaluated theoretically, by comparing and contrasting related approaches found in the literature.
- Published
- 2014
19. Designing a knowledge management architecture to support self-organization in a hotel chain
- Author
-
Kaldis, Emmanuel, Zhao, Liping, and Snowdon, Robert
- Subjects
658.4 ,Knowledge Management Architecture, Knowledge Management Systems, Complexity, Self-organization, Emergence, Edge of Chaos, Complex Systems Model, Information Systems, IS Design - Abstract
Models are incredibly insidious; they slide undetected into discussions and then dominate the way people think. Since Information Systems (ISs), and particularly Knowledge Management Systems (KMSs), are socio-technical systems, they unconsciously embrace the characteristics of the dominant models of management thinking. Thus, their limitations can often be attributed to the deficiencies of the organizational models they aim to support. Through the case study of a hotel chain, this research suggests that contemporary KMSs in the hospitality sector are still grounded in the assumptions of the mechanistic organizational model, which conceives an organization as a rigid hierarchical entity governed from the top. Despite recent technological advances in supporting dialogue and participation between members, organizational knowledge is still transferred vertically, from the top to the bottom or from the bottom to the top. A number of limitations remain in effectively supporting the horizontal transfer of knowledge between the geographically distributed units of an organization. Inspired by the key concepts of the more recent complex systems model, frequently referred to as complexity theories, a Knowledge Management Architecture (KMA) is proposed that aims to re-conceptualize existing KMSs towards conceiving an organization as a set of self-organizing communities of practice (CoPs). In every such CoP, order is created from the dynamic exchange of knowledge between the structurally similar community members. Thus, the focus of the KMA is placed on systematically capturing, for reuse, the architectural knowledge created by every initiative for change, and on sharing such knowledge with the rest of the members of the CoP. A KMS was also developed to support the dynamic dimensions that the KMA proposes. The KMS was then applied in the case of the hotel chain, where it brought significant benefits that constitute evidence of an improved self-organizing ability. The previously isolated hotel units residing in distant regions could now easily trace, and also reapply, changes undertaken by other community members. Top management's intervention to promote change was reduced, while the pace of change increased. Moreover, organizational cohesion, the integration of new members and the level of management alertness were enhanced. The case of the hotel chain is indicative: it is believed that the proposed KMA can be applicable to geographically distributed organizations operating in other sectors too. At the same time, this research contributes to the recent discourse between the fields of IS and complexity by demonstrating how fundamental concepts from complexity, such as self-organization, emergence and edge-of-chaos, can be embraced by contemporary KMSs.
- Published
- 2014