13 results for "Tekli, Joe"
Search Results
2. Comparing deep learning models for low-light natural scene image enhancement and their impact on object detection and classification: Overview, empirical evaluation, and challenges
- Author
-
Al Sobbahi, Rayan and Tekli, Joe
- Published
- 2022
- Full Text
- View/download PDF
3. Low-Light Homomorphic Filtering Network for integrating image enhancement and classification
- Author
-
Al Sobbahi, Rayan and Tekli, Joe
- Published
- 2022
- Full Text
- View/download PDF
4. Full-fledged semantic indexing and querying model designed for seamless integration in legacy RDBMS
- Author
-
Tekli, Joe, Chbeir, Richard, Traina, Agma J.M., Traina, Caetano, Jr., Yetongnon, Kokou, Ibanez, Carlos Raymundo, Al Assad, Marc, and Kallas, Christian
- Published
- 2018
- Full Text
- View/download PDF
5. Full-fledged semantic indexing and querying model designed for seamless integration in legacy RDBMS
- Author
-
Tekli, Joe, Chbeir, Richard, Traina, Agma J.M., Traina, Caetano, Yetongnon, Kokou, Ibanez, Carlos Raymundo, Al Assad, Marc, and Kallas, Christian
- Abstract
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. In the past decade, there has been an increasing need for semantic-aware data search and indexing in textual (structured and NoSQL) databases, as full-text search systems became available to non-experts: users have no knowledge of the data being searched and often formulate query keywords different from those used by the authors in indexing relevant documents, producing noisy and sometimes irrelevant results. In this paper, we address the problem of semantic-aware querying and provide a general framework for modeling and processing semantic-based keyword queries in textual databases, i.e., considering the lexical and semantic similarities/disparities when matching user query and data index terms. To do so, we design and construct a semantic-aware inverted index structure called SemIndex, extending the standard inverted index by constructing a tightly coupled inverted index graph that combines two main resources: a semantic network and a standard inverted index on a collection of textual data. We then provide a general keyword query model with specially tailored query processing algorithms built on top of SemIndex, in order to produce semantic-aware results, allowing the user to choose the results' semantic coverage and expressiveness based on her needs. To investigate the practicality and effectiveness of SemIndex, we discuss its physical design within a standard commercial RDBMS, allowing its graph structure to be created, stored, and queried, thus enabling the system to easily scale up and handle large volumes of data. We have conducted a battery of experiments to test the performance of SemIndex, evaluating its construction time, storage size, query processing time, and result quality, in comparison with a legacy inverted index.
Results highlight both the effectiveness and scalability of our approach. This study is partly funded by the National Council for Scientific Research - Lebanon (CNRS-L), the Lebanese American University (LAU), and the Research Support Foundation of the State of Sao Paulo (FAPESP). Appendix (SemIndex Weighting Scheme): We propose a set of weighting functions to assign weight scores to SemIndex entries, including index nodes, index edges, data nodes, and data edges. The weighting functions are used to select and rank semantically relevant results w.r.t. the user's query (cf. SemIndex query processing in Section 5). Other weight functions could later be added to cater to the index designer's needs. Peer reviewed
- Published
- 2018
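The tightly coupled inverted index graph described in this record, pairing a standard inverted index with a semantic network, can be illustrated with a minimal sketch. This is not the authors' SemIndex implementation; the class, the toy synonym table (standing in for a resource like WordNet), and the documents are all hypothetical:

```python
from collections import defaultdict

# Hypothetical toy synonym network standing in for a semantic resource
# such as WordNet.
SYNONYMS = {
    "car": {"automobile"},
    "automobile": {"car"},
}

class ToySemIndex:
    """Toy semantic-aware inverted index: a query term reaches posting
    lists both directly and through its semantic neighbours."""

    def __init__(self):
        self.index = defaultdict(set)  # term -> set of document ids

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def query(self, term, semantic=True):
        # Exact lexical matches first.
        hits = set(self.index.get(term, set()))
        if semantic:
            # Expand the query through the semantic network.
            for syn in SYNONYMS.get(term, set()):
                hits |= self.index.get(syn, set())
        return hits

idx = ToySemIndex()
idx.add(1, "red car for sale")
idx.add(2, "vintage automobile auction")
print(sorted(idx.query("car")))                  # -> [1, 2]
print(sorted(idx.query("car", semantic=False)))  # -> [1]
```

The `semantic` flag mirrors, very loosely, the user-chosen semantic coverage the abstract mentions: with expansion off, only lexical matches are returned.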
6. SemIndex+: A semantic indexing scheme for structured, unstructured, and partly structured data.
- Author
-
Tekli, Joe, Chbeir, Richard, Traina, Agma J.M., and Traina, Caetano
- Subjects
-
DATA structures, INFORMATION retrieval, SEARCH engines, DIGITAL libraries, SEARCH algorithms
- Abstract
While Information Retrieval (IR) systems have gained success in Web-style search engines in the past two decades, the DataBase (DB) paradigm remains prevalent in handling data in enterprise environments and digital libraries, and is gaining even more importance in the Semantic Web with the increasing need to handle partly structured (NoSQL) data. This paper describes SemIndex+, a semantic-aware indexing and querying framework that allows semantic search, result selection, and result ranking of structured (relational DB-style), unstructured (IR-style), and partly structured (NoSQL) data. Various weighting functions and a parallelized search algorithm have been developed for that purpose and are presented here. We provide a general keyword query model allowing the user to choose the results' semantic coverage and expressiveness based on her needs. Different from alternative solutions involving query relaxation, query refinement, or query disambiguation, our approach incorporates semantics at the most basic data indexing level, providing more opportunities for speedups and semantic coverage. An extensive experimental evaluation, comparing SemIndex+ with alternative methods, highlights our approach's flexibility and effectiveness, which in turn impact efficiency (requiring less or more time following the user-specified index and query semantic coverages). Highlights: • Search, select, and rank unstructured, structured (relational), and partly structured (NoSQL) data. • Maps a textual data collection and a semantic knowledge base into a tightly-coupled semantic graph. • Involves users during semantic index creation, initial query formulation, and query refinement. • Parallelized query processing, with a dedicated model for answer weighting and relevance scoring. • Comparative experiments with legacy methods highlight the solution's flexibility and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
7. Building semantic trees from XML documents.
- Author
-
Tekli, Joe, Charbel, Nathalie, and Chbeir, Richard
- Abstract
The distributed nature of the Web, as a decentralized system exchanging information between heterogeneous sources, has underlined the need to manage interoperability, i.e., the ability to automatically interpret information in Web documents exchanged between different sources, necessary for efficient information management and search applications. In this context, XML was introduced as a data representation standard that simplifies the tasks of interoperation and integration among heterogeneous data sources, allowing data to be represented in (semi-)structured documents consisting of hierarchically nested elements and atomic attributes. However, while XML has proven most effective in exchanging data, i.e., in syntactic interoperability, it is limited when it comes to handling semantics, i.e., semantic interoperability, since it only specifies the syntactic and structural properties of the data without any further semantic meaning. As a result, XML semantic-aware processing has become a motivating challenge in Web data management, requiring dedicated semantic analysis and disambiguation methods to assign well-defined meaning to XML elements and attributes. In this context, most existing approaches: (i) ignore the problem of identifying ambiguous XML elements/nodes, (ii) only partially consider their structural relationships/context, (iii) use syntactic information in processing XML data regardless of the semantics involved, and (iv) are static in adopting fixed disambiguation constraints, thus limiting user involvement. In this paper, we provide a new XML Semantic Disambiguation Framework, titled XSDF, designed to address each of the above limitations, taking as input an XML document and producing as output a semantically augmented XML tree made of unambiguous semantic concepts extracted from a reference machine-readable semantic network.
XSDF consists of four main modules for: (i) linguistic pre-processing of simple/compound XML node labels and values, (ii) selecting ambiguous XML nodes as targets for disambiguation, (iii) representing target nodes as special sphere neighborhood vectors including all XML structural relationships within a (user-chosen) range, and (iv) running context vectors through a hybrid disambiguation process combining two approaches, concept-based and context-based disambiguation, allowing the user to tune disambiguation parameters following her needs. Conducted experiments demonstrate the effectiveness and efficiency of our approach in comparison with alternative methods. We also discuss some practical applications of our method, ranging over semantic-aware query rewriting, semantic document clustering and classification, Mobile and Web services search and discovery, as well as blog analysis and event detection in social networks and tweets. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
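The context-based half of the hybrid disambiguation this record describes can be sketched very simply: score each candidate sense of an ambiguous label by its overlap with the labels of surrounding XML nodes. The sense inventory, glosses, and the `disambiguate` helper below are all hypothetical illustrations, not XSDF's actual vectors or parameters:

```python
# Hypothetical mini sense inventory for the ambiguous label "bank";
# each sense maps to a set of gloss/related terms.
SENSES = {
    "bank#1": {"money", "account", "loan"},
    "bank#2": {"river", "water", "shore"},
}

def disambiguate(label, context):
    """Pick the sense whose gloss terms overlap most with the labels
    of the node's structural neighbourhood (a crude stand-in for the
    sphere-neighborhood context vectors in the abstract)."""
    best, best_score = None, -1
    for sense, gloss in sorted(SENSES.items()):
        score = len(gloss & context)
        if score > best_score:
            best, best_score = sense, score
    return best

# Neighbourhood labels gathered from parent, child, and sibling nodes.
print(disambiguate("bank", {"loan", "account", "branch"}))  # -> bank#1
print(disambiguate("bank", {"river", "shore"}))             # -> bank#2
```

A real system would weight neighbours by structural distance within the user-chosen range; here every neighbour counts equally, which is the simplest possible choice.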
8. Approximate XML structure validation based on document–grammar tree similarity.
- Author
-
Tekli, Joe, Chbeir, Richard, Traina, Agma J.M., Traina, Caetano, Jr., and Fileto, Renato
- Subjects
-
XML (Extensible Markup Language), SOFTWARE validation, GRAMMAR, DOCUMENT classification, SPACETIME
- Abstract
Comparing XML documents with XML grammars, also known as XML document and grammar validation, is useful in various applications such as XML document classification, document transformation, grammar evolution, XML retrieval, and the selective dissemination of information. While exact (Boolean) XML validation has been extensively investigated in the literature, the more general problem of approximate (similarity-based) XML validation, i.e., document–grammar similarity evaluation, has not yet received strong attention. In this paper, we propose an original method for measuring the structural similarity between an XML document and an XML grammar (DTD or XSD), considering their most common operators that designate constraints on the existence, repeatability, and alternativeness of XML elements/attributes (e.g., ?, *, MinOccurs, MaxOccurs, etc.). Our approach exploits the concept of tree edit distance, introducing a novel edit distance recurrence and dedicated algorithms to effectively compare XML documents and grammar structures, modeled as ordered labeled trees. Our method also inherently performs exact validation by imposing a maximum similarity threshold (minimum edit distance) on the returned results. We implemented a prototype and conducted several experiments on large sets of real and synthetic XML documents and grammars. Results underline our approach's effectiveness in classifying similar documents with respect to predefined grammars, accurately detecting document and/or grammar modifications, and performing document and grammar relevance ranking. Time and space analyses were also conducted. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
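Tree edit distance on ordered labeled trees, which this record builds on, can be sketched compactly. The version below is a simplified constrained edit distance (relabel cost plus a sequence alignment of child forests), not the paper's recurrence or the full general tree edit distance; trees are hypothetical tuples of the form `(label, children)`:

```python
from functools import lru_cache

# Trees as (label, (child, child, ...)) tuples so they are hashable.
def tree(label, *children):
    return (label, tuple(children))

@lru_cache(maxsize=None)
def tdist(a, b):
    """Constrained edit distance between ordered labeled trees:
    relabel cost plus an alignment of the child forests. Deleting or
    inserting a whole subtree costs its node count (unit costs)."""
    if a is None and b is None:
        return 0
    if a is None:
        return 1 + sum(tdist(None, c) for c in b[1])
    if b is None:
        return 1 + sum(tdist(c, None) for c in a[1])
    relabel = 0 if a[0] == b[0] else 1
    return relabel + align(a[1], b[1])

def align(xs, ys):
    # Classic DP alignment of the two child sequences; each cell cost
    # is itself a recursive tree distance.
    m, n = len(xs), len(ys)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + tdist(xs[i - 1], None)
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + tdist(None, ys[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + tdist(xs[i - 1], None),       # delete subtree
                d[i][j - 1] + tdist(None, ys[j - 1]),       # insert subtree
                d[i - 1][j - 1] + tdist(xs[i - 1], ys[j - 1]),  # match/relabel
            )
    return d[m][n]

doc = tree("book", tree("title"), tree("author"))
dtd = tree("book", tree("title"), tree("author"), tree("year"))
print(tdist(doc, dtd))  # -> 1 (one missing "year" subtree)
```

The paper's contribution goes further, folding grammar operators (`?`, `*`, `MinOccurs`, `MaxOccurs`) into the recurrence; this sketch only shows the plain document-to-document base case.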
9. A novel XML document structure comparison framework based on sub-tree commonalities and label semantics.
- Author
-
Tekli, Joe and Chbeir, Richard
- Subjects
XML (Extensible Markup Language), DECISION trees, SEMANTICS, DOCUMENT clustering, MATHEMATICAL models, COMPARATIVE studies
- Abstract
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration, and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as ordered labeled trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed when comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operation costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
10. An overview on XML similarity: Background, current trends and future directions.
- Author
-
Tekli, Joe, Chbeir, Richard, and Yetongnon, Kokou
- Subjects
XML (Extensible Markup Language), DATABASE management, MULTIMEDIA systems, INFORMATION retrieval, QUERYING (Computer science), DATA warehousing, COMPARATIVE studies
- Abstract
In recent years, XML has been established as a major means for information management and has been broadly utilized for complex data representation (e.g., multimedia objects). Owing to the unparalleled increasing use of the XML standard, developing efficient techniques for comparing XML-based documents has become essential in the database and information retrieval communities. In this paper, we provide an overview of XML similarity/comparison by presenting existing research related to XML similarity. We also detail the possible applications of XML comparison processes in various fields, ranging over data warehousing, data integration, classification/clustering, and XML querying, and discuss some required and emergent future research directions. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
11. Minimizing user effort in XML grammar matching
- Author
-
Tekli, Joe and Chbeir, Richard
- Subjects
-
XML (Extensible Markup Language), HEURISTIC algorithms, DECISION trees, DATA analysis, PROGRAMMING language semantics, HETEROGENEOUS computing, MATHEMATICAL mappings, MATCHING theory
- Abstract
XML grammar matching has attracted considerable interest recently, due to the growing number of heterogeneous XML documents on the Web and the need to integrate, search, and retrieve XML documents originating from different data sources. In this study, we provide an approach for automatic XML grammar matching and comparison aiming to minimize the amount of user effort required to perform the match task. This requires (i) considering the various characteristics and constraints of XML grammars (in comparison with 'grammar simplifying' approaches), (ii) allowing a flexible combination of different matching criteria (in comparison with static approaches), and (iii) effectively considering the semi-structured nature of XML (in contrast with heuristic methods). To achieve this, we propose an extensible framework based on the concept of tree edit distance as an optimal technique to consider XML structure, integrating different matching criteria to capture all basic XML grammar characteristics, ranging over element semantic and syntactic similarities, cardinality and alternativeness constraints, as well as data-type correspondences and relative ordering. In addition, our framework is flexible, enabling the user to choose mapping cardinality (i.e., 1:1, 1:n, n:1, n:n), in comparison with existing static methods (usually constrained to 1:1). User constraints and feedback are equally considered in order to adjust matching results to the user's perception of correct matches. Experiments on real and synthetic XML grammars demonstrate the effectiveness and efficiency of our matching strategy in identifying mappings, in comparison with alternative methods. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
12. Generic metadata representation framework for social-based event detection, description, and linkage.
- Author
-
Abebe, Minale A., Tekli, Joe, Getahun, Fekade, Chbeir, Richard, and Tekli, Gilbert
- Subjects
-
METADATA, SOCCER tournaments, SOCIAL media, MACHINE learning, TRAFFIC accidents, DATA modeling
- Abstract
Various methods have been put forward to perform automatic social-based event detection and description. Yet, most of them do not capture the semantic meaning embedded in online social media data, which are usually highly heterogeneous and unstructured, and do not identify event relationships (e.g., car accident temporally occurs after storm, and geographically occurs near soccer match). To address this problem, we introduce a generic Social-based Event Detection, Description, and Linkage framework titled SEDDaL, taking as input: a collection of social media objects from heterogeneous sources (e.g., Flickr, YouTube, and Twitter), and producing as output a collection of semantically meaningful events interconnected with spatial, temporal, and semantic relationships. The latter are required as the building blocks for event-based Collective Knowledge (CK) organization, where CK underlines the combination of all known data, information, and metadata concerning a given concept or event. SEDDaL consists of four main modules for: i) describing social media objects in a generic Metadata Representation Space Model (MRSM) consisting of three composite dimensions: temporal, spatial, and semantic, ii) evaluating the similarity between social media objects' descriptions following MRSM, iii) detecting events from similar social media objects using an adapted unsupervised learning algorithm, where events are represented as clusters of objects in MRSM, and iv) identifying directional, metric, and topological relationships between events following MRSM's dimensions. We believe this is the first study to provide a generic model for describing semantic-aware events and their relationships extracted from social metadata on the Web. Experimental results confirm the quality and potential of our approach. • Performs semantic-aware event detection, description, and linkage from social media data. 
• Represents heterogeneous data in a generic model made of temporal, spatial, and semantic dimensions. • Evaluates data similarity using combined temporal, spatial, and semantic similarity measures. • Detects events from similar social media objects using an adapted unsupervised learning algorithm. • Describes events in a generic model and identifies their directional, metric, and topological relations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
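The pipeline this record describes, combining temporal, spatial, and semantic similarity and then clustering similar objects into events, can be sketched with a toy greedy clusterer. The posts, the similarity weights, and the decay constants below are illustrative assumptions, not SEDDaL's actual MRSM model or its adapted learning algorithm:

```python
import math

# Hypothetical social media objects: (timestamp_hours, (lat, lon), tag set).
posts = [
    (0.0, (48.85, 2.35), {"storm", "rain"}),
    (0.5, (48.86, 2.34), {"storm", "wind"}),
    (30.0, (40.71, -74.0), {"soccer", "match"}),
]

def similarity(a, b):
    """Combined temporal, spatial, and semantic similarity in [0, 1]
    (equal weights and exponential decay are illustrative choices)."""
    dt = abs(a[0] - b[0])                       # hours apart
    dx = math.dist(a[1], b[1])                  # coordinate distance
    jac = len(a[2] & b[2]) / len(a[2] | b[2])   # Jaccard over tags
    return (math.exp(-dt / 6) + math.exp(-dx) + jac) / 3

def cluster(objs, threshold=0.5):
    # Greedy single-pass clustering: attach each object to the first
    # event whose representative is similar enough, else open a new event.
    events = []
    for o in objs:
        for ev in events:
            if similarity(ev[0], o) >= threshold:
                ev.append(o)
                break
        else:
            events.append([o])
    return events

events = cluster(posts)
print(len(events))  # -> 2: the two storm posts merge, the soccer post stands alone
```

Once objects are grouped, the directional/metric/topological relations the abstract mentions would be computed between event clusters; that step is omitted here.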
13. Unsupervised word-level affect analysis and propagation in a lexical knowledge graph.
- Author
-
Fares, Mireille, Moufarrej, Angela, Jreij, Eliane, Tekli, Joe, and Grosky, William
- Subjects
-
GRAPH theory, FEATURE extraction, SENTIMENT analysis, SEMANTICS, POTENTIAL theory (Mathematics)
- Abstract
Lexical sentiment analysis (LSA) is of central importance in extracting and analyzing user moods and views on the Web. Most existing LSA approaches have utilized supervised learning techniques applied on corpus-based statistics, requiring extensive training data, training time, and large statistical corpora which are not always available. Other studies have utilized unsupervised and lexicon-based approaches to match target words in a lexical knowledge base (KB) with seed words in a sentiment lexicon, usually suffering from the limited coverage or inconsistent connectivity of affective concepts. In this paper, we introduce LISA, an unsupervised word-level knowledge graph-based LSA framework. It uses different variants of shortest-path graph navigation techniques to compute and propagate affective scores in a lexical-affective graph (LAG), created by connecting a typical lexical KB such as WordNet with a reliable affect KB such as the WordNet-Affect Hierarchy (where any other lexical or affective KB can be utilized). LISA was designed in two consecutive iterations, producing two main modules: i) LISA 1.0 for affect navigation, and ii) LISA 2.0 for affect propagation and lookup. LISA 1.0 suffered from the semantic connectivity problem shared by some existing lexicon-based methods, and required polynomial execution time. This led to the development of LISA 2.0, which i) processes affective relationships separately from lexical/semantic connections (solving the semantic connectivity problem of LISA 1.0), and ii) produces a sentiment lexicon which can be searched in logarithmic time (handling LISA 1.0's efficiency problem). Experimental results on the ANEW dataset show that our approach, namely LISA 2.0, while completely unsupervised, is on a par with existing (semi-)supervised solutions, highlighting its quality and potential. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
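The core idea in this record, propagating affect scores from seed words through a lexical graph via shortest paths, can be sketched with a breadth-first search whose score decays with distance. The mini graph, seed lexicon, and decay rule are hypothetical stand-ins for WordNet, WordNet-Affect, and LISA's actual propagation scheme:

```python
from collections import deque

# Hypothetical lexical-affective graph: undirected synonym-style edges.
GRAPH = {
    "joyful": ["happy"],
    "happy": ["joyful", "glad"],
    "glad": ["happy"],
    "gloomy": ["sad"],
    "sad": ["gloomy"],
}

# Seed affect lexicon, as might be drawn from a resource like WordNet-Affect.
SEEDS = {"joyful": 1.0, "sad": -1.0}

def affect(word):
    """Propagate the nearest seed's score to `word`, attenuated by
    graph distance (one simple shortest-path scheme among many)."""
    seen, queue = {word}, deque([(word, 0)])
    while queue:
        node, depth = queue.popleft()
        if node in SEEDS:
            return SEEDS[node] / (1 + depth)  # decay with distance
        for nbr in GRAPH.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, depth + 1))
    return 0.0  # unreachable from any seed: neutral

print(round(affect("glad"), 3))  # -> 0.333 (two hops from "joyful")
print(affect("gloomy"))          # -> -0.5 (one hop from "sad")
```

Precomputing `affect` for every node and storing the results in a sorted lexicon is what would give the logarithmic-time lookup the abstract attributes to LISA 2.0.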
Discovery Service for Jio Institute Digital Library