25 results for "Muhammad Summair Raza"
Search Results
2. Widely Used Techniques in Data Science Applications
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
ComputingMethodologies_PATTERNRECOGNITION ,Computer science ,Reinforcement learning ,Data science - Abstract
Before discussing the techniques commonly used in data science applications, we first explain the three types of learning: supervised, unsupervised, and reinforcement learning.
- Published
- 2023
- Full Text
- View/download PDF
3. An Improved Approach for Finding Rough Set Based Dynamic Reducts
- Author
-
Asmat Iqbal, Muhammad Summair Raza, Muhammad Ibrahim, Abdullah Baz, Hosam Alhakami, and Muhammad Anwaar Saeed
- Subjects
Reduct ,General Computer Science ,Computational complexity theory ,Computer science ,020209 energy ,relative reducts ,General Engineering ,Dynamic reducts ,02 engineering and technology ,computer.software_genre ,parallel feature sample ,Set (abstract data type) ,Reduction (complexity) ,feature selection ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Data mining ,Rough set ,lcsh:TK1-9971 ,computer ,rough set theory ,Curse of dimensionality - Abstract
The amount of data has grown immensely over the past few years. This growth has led to problems such as the curse of dimensionality, where processing a huge number of features demands a large amount of resources. Many tools are available for processing such large volumes of data, among which Rough Set Theory is a prominent one. It provides the concept of reducts, which are sets of attributes that preserve the maximum amount of information. However, reducts are not stable: they keep changing as more data is added. The concept of dynamic reducts was therefore introduced in the literature to provide a more stable alternative. Several dynamic-reduct-finding algorithms exist; however, they are computationally too expensive and inefficient for large datasets. In this research, we present an improved dynamic-reduct-finding technique based on rough set theory, in which reducts are selected, optimized, and further generalized through a Parallel Feature Sampling (PFS) algorithm. An in-depth investigation using various benchmark datasets demonstrates the effectiveness of the proposed approach: it outperforms state-of-the-art approaches in both efficiency and effectiveness, achieving 96% average accuracy and a 46.13% reduction in execution time against the compared contemporary approaches. (An illustrative sketch of the underlying idea follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
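The following minimal Python sketch illustrates the general intuition behind dynamic (stable) reducts only; it is not the paper's PFS algorithm, and the `dependency` and `stability` helpers and their parameters are assumptions made for illustration.

```python
import random
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Fraction of rows whose attribute signature maps to a single decision value."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in attrs)].append(row[decision])
    return sum(len(d) for d in groups.values() if len(set(d)) == 1) / len(rows)

def stability(rows, candidate, all_attrs, decision, samples=20, frac=0.7):
    """Share of random subtables on which `candidate` preserves the dependency of
    the full attribute set; values near 1.0 suggest a stable (dynamic) reduct."""
    hits = 0
    for _ in range(samples):
        sub = random.sample(rows, max(1, int(frac * len(rows))))
        if dependency(sub, candidate, decision) >= dependency(sub, all_attrs, decision):
            hits += 1
    return hits / samples
```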
4. An In-Depth Empirical Investigation of State-of-the-Art Scheduling Approaches for Cloud Computing
- Author
-
Altaf Hussain, Karim Djemame, Muhammad Summair Raza, Muhammad Ibrahim, Hosam Alhakami, Said Nabi, Khaled Salah, and Abdullah Baz
- Subjects
task scheduling ,General Computer Science ,Computer science ,Distributed computing ,load balancing ,resource allocation ,Cloud computing ,02 engineering and technology ,scheduling algorithms ,Scheduling (computing) ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,020203 distributed computing ,Job shop scheduling ,business.industry ,General Engineering ,020206 networking & telecommunications ,Energy consumption ,Load balancing (computing) ,performance evaluation ,Analytics ,Scalability ,Resource allocation ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,lcsh:TK1-9971 - Abstract
Recently, Cloud computing has emerged as one of the most widely used platforms for providing compute, storage, and analytics services to end-users and organizations on a pay-as-you-use basis, with high agility, availability, scalability, and resiliency. This gives individuals and organizations access to a large pool of high-performance processing resources without the need to establish a high-performance computing (HPC) platform. Over the past few years, task scheduling in Cloud computing has become a prominent research topic; however, task scheduling is an NP-hard problem. In this work, we investigate and empirically compare some of the most prominent state-of-the-art scheduling heuristics in terms of makespan, average resource utilization ratio (ARUR), throughput, and energy consumption. The comparison is then extended by evaluating the approaches in terms of load imbalance at the individual VM level. Extensive simulation and comparative analysis reveal that the Task Aware Scheduling Algorithm (TASA) and Proactive Simulation-based Scheduling and Load Balancing (PSSLB) outperform the remaining approaches and appear to be the optimal choices, considering the trade-off between the complexity involved and the performance achieved with respect to makespan, throughput, resource utilization, and energy consumption. (A sketch of these metrics follows this entry.)
- Published
- 2020
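The sketch below shows common textbook-style definitions of the metrics named in the abstract; the paper may use slightly different formulations, and the function names and the toy numbers are illustrative assumptions.

```python
def makespan(vm_finish_times):
    """Completion time of the whole batch: the latest VM finish time."""
    return max(vm_finish_times)

def throughput(num_tasks, makespan_value):
    """Tasks successfully executed per unit of makespan."""
    return num_tasks / makespan_value

def arur(vm_busy_times, makespan_value):
    """Average Resource Utilization Ratio: mean per-VM busy time over the makespan."""
    return sum(t / makespan_value for t in vm_busy_times) / len(vm_busy_times)

def vm_load_spread(vm_busy_times):
    """Simple indicator of VM-level load imbalance: max minus min per-VM load."""
    return max(vm_busy_times) - min(vm_busy_times)

# Example: three VMs that all start at time zero and stay busy for 40, 55 and
# 60 time units while executing 90 tasks in total.
busy = [40, 55, 60]
ms = makespan(busy)                      # 60
print(throughput(90, ms))                # 1.5 tasks per time unit
print(round(arur(busy, ms), 3))          # ~0.861
print(vm_load_spread(busy))              # 20
```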
5. A Comparative Analysis of Task Scheduling Approaches in Cloud Computing
- Author
-
Muhammad Summair Raza, Said Nabi, Muhammad Imran, S. M. Ahsan Kazmi, Muhammad Ibrahim, Rasheed Hussain, Alma Oracevic, and Fatima Hussain
- Subjects
Cloud resources ,Job shop scheduling ,Computer science ,business.industry ,Analytics ,Distributed computing ,Scalability ,CloudSim ,Cloud computing ,Load balancing (computing) ,business ,Scheduling (computing) - Abstract
Recently, cloud computing has emerged as a primary enabling technology for providing compute, storage, platform, and analytics services to end-users and organizations on a pay-as-you-use basis. In essence, the cloud provides agility, availability, scalability, and resiliency. However, the increased number of users leads to issues such as scheduling of requests, demands, and workload efficiency over the available cloud resources. Since the inception of cloud computing, task scheduling has been reckoned an essential ingredient in the commercial value of this technology. Task scheduling is considered an NP-hard problem in cloud computing, and different solutions exist in the literature to address it. In this paper, we investigate and empirically compare some recent state-of-the-art scheduling mechanisms in cloud computing with respect to makespan (the time difference between the start and finish of a sequence of jobs or tasks) and throughput (the number of tasks successfully executed per unit of makespan). We then extend the comparison by evaluating the considered approaches with respect to the Average Resource Utilization Ratio (ARUR). We also identify and recommend factors that can improve resource utilization and maximize revenue generation for cloud service providers.
- Published
- 2020
- Full Text
- View/download PDF
6. A heuristic based dependency calculation technique for rough set theory
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
0209 industrial biotechnology ,Dependency (UML) ,Computer science ,Heuristic ,Feature selection ,02 engineering and technology ,Data structure ,computer.software_genre ,Set (abstract data type) ,020901 industrial engineering & automation ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Data mining ,Rough set ,Heuristics ,computer ,Software - Abstract
Feature selection is the process of selecting a subset of features that still provides the maximum amount of information otherwise provided by the entire set of conditional attributes. Many approaches have been proposed in the literature for this purpose, and rough set based approaches have recently become dominant. The majority of these approaches use attribute dependency to measure the significance of attributes. The problem with this measure is that it computes dependency through the positive region, which is computationally expensive and consequently degrades the performance of feature selection algorithms that rely on it. In this paper, we propose a new heuristic-based dependency calculation technique that avoids the positive region. The proposed method finds the consistent records for each decision class in the dataset, which enhances the computational efficiency of the underlying feature selection algorithm and enables it to be used on datasets beyond smaller sizes. To calculate dependency with the proposed method, we use a two-dimensional grid as an intermediate data structure. Several feature selection algorithms were run with the proposed solution on various publicly available datasets, and a comparison framework was used to compare it with conventional methods. The results justify the proposed solution in terms of both efficiency and effectiveness. (An illustrative contrast between the two routes follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
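The following sketch contrasts the two routes the abstract describes: computing dependency through the positive region versus counting consistent records with a grid-like structure. It is illustrative only, not the authors' exact procedure, and the function names are assumptions.

```python
from collections import defaultdict

def dependency_via_positive_region(rows, attrs, decision):
    """Conventional route: build equivalence classes under `attrs` and count the
    objects in classes whose members all share one decision value (the positive region)."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    pos_size = sum(len(objs) for objs in classes.values()
                   if len({rows[i][decision] for i in objs}) == 1)
    return pos_size / len(rows)

def dependency_via_grid(rows, attrs, decision):
    """Grid route: one pass fills a (signature x decision) count grid; a signature
    whose counts fall in a single decision column contributes all of its records
    as consistent, so the positive region is never materialised."""
    grid = defaultdict(lambda: defaultdict(int))
    for row in rows:
        grid[tuple(row[a] for a in attrs)][row[decision]] += 1
    consistent = sum(sum(cols.values()) for cols in grid.values() if len(cols) == 1)
    return consistent / len(rows)
```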
7. Redefining core preliminary concepts of classic Rough Set Theory for feature selection
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Reduct ,0209 industrial biotechnology ,Relation (database) ,Computer science ,Dominance-based rough set approach ,Feature selection ,02 engineering and technology ,computer.software_genre ,Measure (mathematics) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,Electrical and Electronic Engineering ,computer - Abstract
Data is growing at an exponential pace, and coping with this data explosion requires effective data processing and analysis techniques. Feature selection is selecting a subset of features from a dataset that still provides most of the useful information. Various tools are available as an underlying framework for this process; Rough Set Theory is the most prominent due to its analysis-friendly nature. The majority of rough set based feature selection algorithms use the positive-region-based dependency measure as the sole criterion for selecting a feature subset. Calculating the positive region requires the lower approximation, which in turn involves the indiscernibility relation. In this paper, new definitions of two rough set preliminaries, the lower and upper approximations, are proposed. The new definitions are computationally less expensive than the conventional ones: on five publicly available datasets, the redefined lower approximation showed a 42.78% decrease in execution time and the redefined upper approximation a 43.06% decrease, while maintaining 100% accuracy. Finally, based on these redefined approximations, we propose a feature selection algorithm which, compared with state-of-the-art techniques, shows a significant increase in performance without affecting accuracy. (A sketch of the conventional approximations follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
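For reference, the sketch below shows the conventional lower and upper approximations that the paper redefines; the redefined versions themselves are in the paper and are not reproduced here, and the function name is an assumption.

```python
from collections import defaultdict

def approximations(rows, attrs, target_indices):
    """Lower/upper approximation of a target set of object indices with respect
    to the indiscernibility relation induced by `attrs`."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].add(i)
    target = set(target_indices)
    lower, upper = set(), set()
    for objs in classes.values():
        if objs <= target:          # class fully inside the target: certainly in it
            lower |= objs
        if objs & target:           # class overlaps the target: possibly in it
            upper |= objs
    return lower, upper
```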
8. Data Science Programming Languages
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Computer science ,Programming language ,R Programming Language ,Python (programming language) ,computer.software_genre ,computer ,Data science ,computer.programming_language - Abstract
In this chapter we discuss two programming languages commonly used for data science projects: Python and R. The reason is that a large community uses these languages and many libraries are available online for them. Python is discussed first, followed by R.
- Published
- 2020
- Full Text
- View/download PDF
9. An incremental dependency calculation technique for feature selection using rough sets
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
0209 industrial biotechnology ,Information Systems and Management ,Dependency (UML) ,Computer science ,Feature selection ,02 engineering and technology ,computer.software_genre ,Measure (mathematics) ,Theoretical Computer Science ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,business.industry ,Process (computing) ,Pattern recognition ,Computer Science Applications ,Task (computing) ,Control and Systems Engineering ,Feature (computer vision) ,Pattern recognition (psychology) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Rough set ,Data mining ,business ,computer ,Software - Abstract
In many fields, such as data mining, machine learning and pattern recognition, datasets containing large numbers of features are often involved. In such cases, feature selection is necessary. Feature selection is the process of selecting a feature subset on behalf of the entire dataset for further processing. Recently, rough set based approaches, which use attribute dependency to carry out feature selection, have been prominent. However, this dependency measure requires the calculation of the positive region, which is a computationally expensive task. In this paper, we propose a new concept called the "Incremental Dependency Class" (IDC), which calculates attribute dependency without using the positive region. IDCs define the change in attribute dependency as we move from one record to the next. By avoiding the positive region, IDCs can be an ideal replacement for the conventional dependency measure in feature selection algorithms, especially for large datasets. Experiments conducted on various publicly available datasets from the UCI repository show that calculating dependency using IDCs reduces execution time by 54%, while feature selection algorithms using IDCs reduce execution time by almost 66%. Overall, a 68% decrease in required runtime memory was also observed. (An illustrative incremental update follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
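The class below sketches a record-by-record dependency update in the spirit of incremental dependency classes; the exact IDC update rules are those of the paper, and the class name and bookkeeping shown here are illustrative assumptions.

```python
from collections import defaultdict

class IncrementalDependency:
    def __init__(self, attrs, decision):
        self.attrs, self.decision = attrs, decision
        self.counts = defaultdict(lambda: defaultdict(int))  # signature -> decision -> count
        self.consistent = 0   # records whose signature maps to a single decision value
        self.n = 0

    def add(self, row):
        """Insert one record and return the updated dependency, without ever
        rebuilding the positive region from scratch."""
        cols = self.counts[tuple(row[a] for a in self.attrs)]
        before_total = sum(cols.values())
        before_consistent = before_total if len(cols) <= 1 else 0
        cols[row[self.decision]] += 1
        after_consistent = before_total + 1 if len(cols) == 1 else 0
        self.consistent += after_consistent - before_consistent
        self.n += 1
        return self.consistent / self.n
```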
10. An optimized method to calculate approximations in Dominance based Rough Set Approach
- Author
-
Aleena Ahmad, Muhammad Summair Raza, and Usman Qamar
- Subjects
0209 industrial biotechnology ,Computer science ,Computation ,Dominance-based rough set approach ,02 engineering and technology ,Reduction (complexity) ,Task (computing) ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Categorical variable ,Algorithm ,Preference (economics) ,Software - Abstract
Classical Rough Set Theory (RST) is a prominent tool for dealing with uncertainty in categorical data. However, it does not consider the preference order between attribute values. The Dominance-Based Rough Set Approach (DRSA) provides a dominance relation for this purpose. Computing the upper and lower approximations is a critical step in DRSA, yet it is computationally expensive, so computing these approximations efficiently helps reduce the execution time of algorithms that use them. In this paper, we propose an efficient approach to compute these measures. The proposed approach calculates the approximations directly, without considering objects that play no role in them, and compares one instance of a dataset with another only once, avoiding unnecessary comparisons. The proposed approach was compared with the conventional method on sixteen benchmark datasets from UCI. Results show that it significantly reduces execution time, with an average reduction of almost 85%, and reduces memory consumption by 75%. The Big-O complexity is also reduced. These results show that the proposed approach is more effective and efficient than the conventional DRSA. (A sketch of the standard DRSA approximations follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
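The sketch below shows the standard DRSA approximations of an upward union of classes, assuming all criteria are gain-type; the paper's contribution is computing these approximations more efficiently, which is not reproduced here, and the function names are assumptions.

```python
def dominates(x, y, criteria):
    """x dominates y if x is at least as good as y on every criterion."""
    return all(x[c] >= y[c] for c in criteria)

def upward_union_approximations(rows, criteria, decision, t):
    """Lower/upper approximation of the upward union Cl_t^>= : objects whose
    decision class is at least t."""
    universe = range(len(rows))
    union = {i for i in universe if rows[i][decision] >= t}
    lower, upper = set(), set()
    for i in universe:
        dominating = {j for j in universe if dominates(rows[j], rows[i], criteria)}
        dominated = {j for j in universe if dominates(rows[i], rows[j], criteria)}
        if dominating <= union:   # everything at least as good is in the union
            lower.add(i)
        if dominated & union:     # something no better than i is in the union
            upper.add(i)
    return lower, upper
```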
11. FIPA-based reference architecture for efficient discovery and selection of appropriate cloud service using cloud ontology
- Author
-
Muhammad Ibrahim, Amjad Mehmood, Ghulam Abbas, Jaime Lloret, and Muhammad Summair Raza
- Subjects
Thesaurus (information retrieval) ,Computer Networks and Communications ,Cloud ontology ,business.industry ,Computer science ,Distributed computing ,JADE (programming language) ,Cloud computing ,Reference architecture ,Electrical and Electronic Engineering ,business ,computer ,Selection (genetic algorithm) ,computer.programming_language - Published
- 2020
- Full Text
- View/download PDF
12. Introduction to Classical Rough Set Based APIs Library
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Code (set theory) ,Source code ,Programming language ,Computer science ,media_common.quotation_subject ,Research community ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Rough set ,Line (text file) ,computer.software_genre ,computer ,media_common - Abstract
In this chapter, we provide implementations of some basic functions of Rough Set Theory. Implementations of RST functions can be found in other libraries as well; the distinguishing aspect here is that the source code is provided with each and every line explained. This explanation helps the research community not only to use the code easily but also to modify it for their own research requirements.
- Published
- 2019
- Full Text
- View/download PDF
13. Advanced Concepts in Rough Set Theory
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Computer science ,Fuzzy set ,Calculus ,Rough set - Abstract
In the last chapter, we discussed some basic concepts of Rough Set Theory. In this chapter, we present some advanced concepts, including improved definitions, examples, and the hybridization of RST with fuzzy set theory.
- Published
- 2019
- Full Text
- View/download PDF
14. Dominance Based Rough Set APIs Library
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Class (computer programming) ,Source code ,Dominance (economics) ,Computer science ,media_common.quotation_subject ,Dominance relation ,Calculus ,Point (geometry) ,Rough set ,Visual Basic for Applications ,Logic programming ,media_common - Abstract
In this chapter we present VBA source code for calculating approximations. We calculate both lower and upper approximations along with the dominance relation and class unions. The main intention of the chapter is to clarify the programming logic behind calculating these measures; some basic familiarity with VBA is recommended at this point.
- Published
- 2019
- Full Text
- View/download PDF
15. Rough Set Theory Based Feature Selection Techniques
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Computer science ,business.industry ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Pattern recognition ,Feature selection ,Artificial intelligence ,Rough set ,business - Abstract
Rough Set Theory has been successfully used for feature selection. The underlying concepts provided by RST help in finding representative features by eliminating redundant ones. In this chapter, we present various feature selection techniques that use RST concepts.
- Published
- 2019
- Full Text
- View/download PDF
16. Fuzzy Rough Sets
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
Fuzzy rough set theory ,Computer science ,Core (graph theory) ,Fuzzy rough sets ,Algorithm - Abstract
In this chapter, we will discuss the Fuzzy Rough Set Theory. Some core preliminaries will be presented along with necessary details. We will also discuss some state-of-the-art fuzzy rough set approaches from the literature.
- Published
- 2019
- Full Text
- View/download PDF
17. Rough Set Theory
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Set (abstract data type) ,Range (mathematics) ,Theoretical computer science ,Decision system ,Computer science ,Information system ,Feature selection ,Rough set ,Data structure - Abstract
This chapter discusses the basic preliminaries of rough set theory (RST). Since its inception, RST has been a prominent tool for data analysis due to its analysis-friendly nature. RST provides a range of data structures, e.g. information systems, decision systems and approximations, to represent real-world data, and various methods to help analyse it. This chapter discusses the basic concepts of RST with examples to set a strong foundation for using RST in feature selection. (A small decision-system sketch follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
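As a minimal illustration of the decision systems the chapter describes, the sketch below builds a tiny decision table and its indiscernibility classes; the attribute names and values are made up for illustration.

```python
from collections import defaultdict

# Objects described by condition attributes plus one decision attribute.
decision_system = [
    {"headache": "yes", "temperature": "high",   "flu": "yes"},
    {"headache": "yes", "temperature": "normal", "flu": "no"},
    {"headache": "no",  "temperature": "high",   "flu": "yes"},
    {"headache": "no",  "temperature": "normal", "flu": "no"},
]
condition_attrs, decision_attr = ["headache", "temperature"], "flu"

def indiscernibility_classes(rows, attrs):
    """Group objects that are identical on `attrs`."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    return list(classes.values())

print(indiscernibility_classes(decision_system, ["temperature"]))  # [[0, 2], [1, 3]]
```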
18. Introduction to Feature Selection
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
Computer science ,020204 information systems ,Research community ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Feature selection ,02 engineering and technology ,Data science - Abstract
This is an era of information. However, data is only valuable if it is processed efficiently and useful information is derived from it. Applications that require data with thousands of attributes are now common, and processing such datasets requires a huge amount of resources. To overcome this issue, the research community has developed an effective tool called feature selection, which lets us select only the relevant data to use on behalf of the entire dataset. In this chapter we discuss the necessary preliminaries of feature selection.
- Published
- 2019
- Full Text
- View/download PDF
19. Advance Concepts in RST
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
Computer science ,Fuzzy set ,Calculus ,Rough set - Abstract
In the last chapter, we discussed some basic concepts of rough set theory. In this chapter we present advanced concepts in RST, including improved definitions, examples, and hybridization with fuzzy set theory.
- Published
- 2017
- Full Text
- View/download PDF
20. Critical Analysis of Feature Selection Algorithms
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
Truncation selection ,Computer science ,Feature (computer vision) ,Dimensionality reduction ,Supervised learning ,Unsupervised learning ,Minimum redundancy feature selection ,Probabilistic analysis of algorithms ,Feature selection ,Algorithm - Abstract
In previous chapters, we discussed various feature selection algorithms, both rough set based and non-rough set based, for supervised and unsupervised learning. In this chapter we provide an analysis of different RST-based feature selection algorithms: different experiments were performed to compare their performance, with explicit discussion of the results.
- Published
- 2017
- Full Text
- View/download PDF
21. RST Source Code
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Source code ,Programming language ,Computer science ,media_common.quotation_subject ,Microsoft excel ,Visual Basic for Applications ,computer.software_genre ,Research community ,Line (geometry) ,Code (cryptography) ,Rough set ,Function (engineering) ,computer ,media_common - Abstract
In this chapter we provide implementations of some basic functions of rough set theory. Implementations of RST functions can be found in other libraries as well; the distinguishing aspect here is that the source code is provided with each and every line explained, which helps the research community not only to use the code easily but also to modify it for their own research requirements. We have used Microsoft Excel VBA to implement the functions, because VBA allows easy implementation and almost any dataset can easily be loaded into Excel. We provide not only implementations of some basic RST concepts but also complete, explained source code for some of the most common algorithms, such as PSO, GA and QuickReduct. (A short sketch of the QuickReduct idea follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
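The chapter's listings are in VBA; the sketch below is a compact Python rendering of the well-known greedy QuickReduct loop, not the book's own code, and the helper names are assumptions.

```python
from collections import defaultdict

def gamma(rows, attrs, decision):
    """Rough-set dependency of the decision on `attrs`."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in attrs)].append(row[decision])
    return sum(len(d) for d in groups.values() if len(set(d)) == 1) / len(rows)

def quickreduct(rows, all_attrs, decision):
    """Greedily add the attribute that raises the dependency the most until it
    matches the dependency of the full attribute set."""
    full = gamma(rows, list(all_attrs), decision)
    reduct = []
    while gamma(rows, reduct, decision) < full:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: gamma(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct
```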
22. Unsupervised Feature Selection Using RST
- Author
-
Usman Qamar and Muhammad Summair Raza
- Subjects
Class information ,ComputingMethodologies_PATTERNRECOGNITION ,Section (archaeology) ,Feature (computer vision) ,Computer science ,business.industry ,Feature selection ,Pattern recognition ,Artificial intelligence ,Rough set ,business ,Class (biology) - Abstract
Supervised feature selection evaluates features by the information they provide for classification accuracy, which requires labelled data. In the real world, however, not all data is properly labelled, so we may encounter situations where little or no class information is available. For such data, we need unsupervised feature selection techniques that can find feature subsets without being given any class labels. In this section, we discuss some unsupervised feature subset selection algorithms based on rough set theory.
- Published
- 2017
- Full Text
- View/download PDF
23. A Rough Set Based Feature Selection Approach Using Random Feature Vectors
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
business.industry ,Computer science ,Feature vector ,Dimensionality reduction ,Feature extraction ,Kanade–Lucas–Tomasi feature tracker ,020207 software engineering ,Feature selection ,Pattern recognition ,02 engineering and technology ,computer.software_genre ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,Artificial intelligence ,business ,Cluster analysis ,computer - Abstract
Feature selection is the process of selecting a subset of features that provides the maximum amount of the information otherwise present in the entire dataset. The process is very helpful when the input to tasks such as classification, clustering and rule extraction is large. Rough Set Theory, right from its emergence, has been widely used for feature selection due to its analysis-friendly nature, and various approaches exist in the literature for this purpose. However, the majority of them are computationally too expensive and suffer from a significant performance bottleneck. In this paper we propose a new feature selection approach based on rough set theory that uses a random feature vector generation method. The proposed approach is a two-step method: first, it generates a random feature vector and verifies its suitability as a potential candidate solution; if the vector fulfils the criteria, it is selected and optimized, otherwise a new subset is formed. The approach was verified using five publicly available datasets, and results show that it is computationally more efficient and produces optimal results. (A sketch of this two-step pattern follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
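The sketch below illustrates the generate-verify-optimize pattern the abstract describes; the generation probability, the shrinking rule and the function names are placeholder assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Fraction of rows whose attribute signature maps to a single decision value."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in attrs)].append(row[decision])
    return sum(len(d) for d in groups.values() if len(set(d)) == 1) / len(rows)

def random_vector_feature_selection(rows, all_attrs, decision, tries=100):
    full = dependency(rows, list(all_attrs), decision)
    for _ in range(tries):
        subset = [a for a in all_attrs if random.random() < 0.5]   # random feature vector
        if subset and dependency(rows, subset, decision) >= full:  # suitability check
            for a in list(subset):                                 # optimization: drop redundant attributes
                if len(subset) > 1 and dependency(rows, [x for x in subset if x != a], decision) >= full:
                    subset.remove(a)
            return subset
    return list(all_attrs)   # fall back to all attributes if no candidate qualified
```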
24. A hybrid feature selection approach based on heuristic and exhaustive algorithms using Rough set theory
- Author
-
Muhammad Summair Raza and Usman Qamar
- Subjects
Heuristic ,Computer science ,Heuristic (computer science) ,business.industry ,Brute-force search ,Particle swarm optimization ,020207 software engineering ,Feature selection ,02 engineering and technology ,Machine learning ,computer.software_genre ,Feature (computer vision) ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,Artificial intelligence ,business ,computer ,Curse of dimensionality - Abstract
A dataset may have many irrelevant and unnecessary features, which not only increase the computational space but also lead to a critical phenomenon called the curse of dimensionality. The feature selection process aims at selecting some relevant features, on behalf of the entire dataset, for further processing. However, extracting such information is a non-trivial task, especially for large datasets. Many feature selection approaches have been proposed in the literature, and rough set based heuristic approaches have recently become prominent; however, these approaches do not guarantee an optimum solution. In this paper, a hybrid approach to feature selection is proposed, based on a heuristic algorithm and exhaustive search: the heuristic algorithm finds an initial feature subset, which is then further optimized by exhaustive search. We use genetic algorithm and particle swarm optimization as the preprocessor and relative dependency for optimization. Experiments show that the proposed approach is more effective and efficient than the conventional relative dependency based approach. (A sketch of the exhaustive refinement stage follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
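The sketch below covers only the exhaustive second stage: it takes the subset proposed by a heuristic (e.g. GA or PSO, not shown) and a dependency function such as those sketched in earlier entries, and keeps the smallest sub-subset that preserves the dependency. It is illustrative only, not the paper's scheme, and the function name and interface are assumptions.

```python
from itertools import combinations

def exhaustive_refine(rows, heuristic_subset, decision, dep):
    """Exhaustively search the sub-subsets of the heuristic's result, smallest
    first, and return the first one that preserves its dependency (`dep` is any
    dependency function with the signature dep(rows, attrs, decision))."""
    target = dep(rows, list(heuristic_subset), decision)
    for size in range(1, len(heuristic_subset) + 1):
        for cand in combinations(heuristic_subset, size):
            if dep(rows, list(cand), decision) >= target:
                return list(cand)
    return list(heuristic_subset)
```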
25. An integrated approach for developing semantic-mismatch free commercial off the shelf (COTS) components
- Author
-
Muhammad Summair Raza, Ahmad Mohsin, and Shafqat Hussain Majoka
- Subjects
Time delay and integration ,Software ,Computer science ,business.industry ,Process (engineering) ,Distributed computing ,Embedded system ,Component (UML) ,Fault tolerance ,Resolution (logic) ,business ,Commercial off-the-shelf ,Task (project management) - Abstract
Modern software is mostly developed by integrating prefabricated COTS components, as this is the simplest way to build systems quickly and at lower cost than traditional development approaches. The promising features of component-based software engineering (CBSE) have introduced the idea of assembling software rather than building it. Assembling software in this way results in rapid development and lower cost, with quality software assembled from pre-tested COTS components. However, the task is not as easy as it appears: assembling software from existing components presents further challenges, among which integration-time mismatches are one. Various strategies have been proposed to overcome these mismatches, each requiring some external mechanism outside the component to resolve them. This paper is an endeavour to provide an integrated approach for resolving integration-time semantic mismatches. It enables COTS components to detect semantic mismatches and resolve them by themselves, letting the COTS component itself participate in the mismatch resolution process. External mediation is thus reduced as far as possible, resulting in a smooth integration process and a cut in integration cost. The proposed approach further enhances the fault tolerance capabilities of COTS components as they are used more and more.
- Published
- 2010
- Full Text
- View/download PDF