9,712 results
Search Results
2. HTPosum: Heterogeneous Tree Structure augmented with Triplet Positions for extractive Summarization of scientific papers
- Author
- Zhu, Zhenfang, Gong, Shuai, Qi, Jiangtao, and Tong, Chunling
- Published
- 2024
- Full Text
- View/download PDF
3. Integrity verification for scientific papers: The first exploration of the text
- Author
- Shi, Xiang, Liu, Yinpeng, Liu, Jiawei, Cheng, Qikai, and Lu, Wei
- Published
- 2024
- Full Text
- View/download PDF
4. Developing a fuzzy optimized model for selecting a maintenance strategy in the paper industry: An integrated FGP-ANP-FMEA approach
- Author
- Behnia, Foroogh, Zare Ahmadabadi, Habib, Schuelke-Leech, Beth-Anne, and Mirhassani, Mitra
- Published
- 2023
- Full Text
- View/download PDF
5. MARec: A multi-attention aware paper recommendation method
- Author
- Wang, Jie, Zhou, Jingya, Wu, Zhen, and Sun, Xigang
- Published
- 2023
- Full Text
- View/download PDF
6. OpenMetaRec: Open-metapath heterogeneous dual attention network for paper recommendation
- Author
- Xiao, Xia, Huang, Jiaying, Wang, Haobo, Zhang, Chengde, and Chen, Xinzhong
- Published
- 2023
- Full Text
- View/download PDF
7. HetTreeSum: A Heterogeneous Tree Structure-based Extractive Summarization Model for Scientific Papers
- Author
- Zhao, Jintao, Yang, Libin, and Cai, Xiaoyan
- Published
- 2022
- Full Text
- View/download PDF
8. Mutually reinforced network embedding: An integrated approach to research paper recommendation
- Author
- Mei, Xin, Cai, Xiaoyan, Xu, Sen, Li, Wenjie, Pan, Shirui, and Yang, Libin
- Published
- 2022
- Full Text
- View/download PDF
9. PSRMTE: Paper submission recommendation using mixtures of transformer
- Author
- Nguyen, Dac Huu, Huynh, Son Thanh, Dinh, Cuong Viet, Huynh, Phong Tan, and Nguyen, Binh Thanh
- Published
- 2022
- Full Text
- View/download PDF
10. Extraction and evaluation of formulaic expressions used in scholarly papers
- Author
- Iwatsuki, Kenichi, Boudin, Florian, and Aizawa, Akiko
- Published
- 2022
- Full Text
- View/download PDF
11. Citation recommendation using semantic representation of cited papers’ relations and content
- Author
- Zhang, Jinzhu and Zhu, Lipeng
- Published
- 2022
- Full Text
- View/download PDF
12. FAST2: An intelligent assistant for finding relevant papers
- Author
- Yu, Zhe and Menzies, Tim
- Published
- 2019
- Full Text
- View/download PDF
13. HTPosum: Heterogeneous Tree Structure augmented with Triplet Positions for extractive Summarization of scientific papers
- Author
- Zhu, Zhenfang, Gong, Shuai, Qi, Jiangtao, and Tong, Chunling
- Published
- 2023
- Full Text
- View/download PDF
14. Integrity Verification for Scientific Papers: The first exploration of the text
- Author
- Shi, Xiang, Liu, Yinpeng, Liu, Jiawei, Cheng, Qikai, and Lu, Wei
- Published
- 2023
- Full Text
- View/download PDF
15. Fluctuating demand and its impacts to a paper producer: Customer analysis
- Author
- Hämäläinen, Esa, Hilmola, Olli-Pekka, and Hetemäki, Lauri
- Published
- 2015
- Full Text
- View/download PDF
16. Position paper: Benchmarking the performance of global and emerging knowledge cities
- Author
- Yigitcanlar, Tan
- Published
- 2014
- Full Text
- View/download PDF
17. HetTreeSum: A Heterogeneous Tree Structure-based Extractive Summarization Model for Scientific Papers
- Author
- Jintao Zhao, Libin Yang, and Xiaoyan Cai
- Subjects
Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
18. Mutually reinforced network embedding: An integrated approach to research paper recommendation
- Author
- Xin Mei, Xiaoyan Cai, Sen Xu, Wenjie Li, Shirui Pan, and Libin Yang
- Subjects
Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
19. PSRMTE: Paper submission recommendation using mixtures of transformer
- Author
- Dac Huu Nguyen, Son Thanh Huynh, Cuong Viet Dinh, Phong Tan Huynh, and Binh Thanh Nguyen
- Subjects
Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
20. FAST2: An intelligent assistant for finding relevant papers
- Author
- Tim Menzies and Zhe Yu
- Subjects
FOS: Computer and information sciences, 0209 industrial biotechnology, Information retrieval, D.2.0, Computer science, I.2.7, Human error, General Engineering, 02 engineering and technology, Computer Science Applications, Software Engineering (cs.SE), Computer Science - Software Engineering, 020901 industrial engineering & automation, Systematic review, Artificial Intelligence, 68N01, 68T50, 0202 electrical engineering, electronic engineering, information engineering, Key (cryptography), Selection (linguistics), Domain knowledge, 020201 artificial intelligence & image processing, Review process
- Abstract
Literature reviews are essential for any researcher trying to keep up to date with the burgeoning software engineering literature. FAST2 is a novel tool for reducing the effort required to conduct literature reviews by helping researchers find the next promising paper to read (among a set of unread papers). This paper describes FAST2 and tests it on four large software engineering literature reviews conducted by Wahono (2015), Hall (2012), Radjenović (2013) and Kitchenham (2017). We find that FAST2 is a fast and robust tool for helping researchers find relevant SE papers, and that it can compensate for errors made by humans during the review process. The effectiveness of FAST2 can be attributed to three key innovations: (1) a novel way of applying external domain knowledge (a simple two- or three-keyword search) to guide the initial selection of papers, which helps to find relevant research papers faster and with less variance; (2) an estimator of the number of remaining relevant papers yet to be found, which in practical settings can be used to decide whether the reviewing process should be terminated; (3) a novel self-correcting classification algorithm that automatically corrects itself in cases where the researcher wrongly classifies a paper.
- Comment: 20+3 pages, 6 figures, 5 tables, and 4 algorithms. Accepted by Journal of Expert Systems with Applications
- Published
- 2019
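The reviewer-in-the-loop selection described in the FAST2 abstract above (seed with a keyword search, read the most promising unread paper, learn from each human label) can be pictured with a toy active-learning loop. Everything below, the `keyword_score` weighting, the update constants, and the tiny corpus, is an illustrative assumption and not the authors' actual algorithm:

```python
# Minimal sketch of an active-learning review loop in the spirit of FAST2:
# seed with a keyword query, then repeatedly present the highest-scoring
# unread paper and update term weights from the reviewer's label.
from collections import Counter

def keyword_score(text, weights):
    """Score a paper as the sum of weights of the terms it contains."""
    return sum(weights.get(tok, 0.0) for tok in set(text.lower().split()))

def review_loop(papers, is_relevant, seed_terms, budget):
    """papers: {id: text}; is_relevant: oracle id -> bool (the human reviewer)."""
    weights = Counter({t: 1.0 for t in seed_terms})    # seed keyword search
    unread, found = set(papers), []
    for _ in range(min(budget, len(papers))):
        best = max(unread, key=lambda p: keyword_score(papers[p], weights))
        unread.remove(best)
        label = is_relevant(best)                      # human reads the paper
        for tok in set(papers[best].lower().split()):  # reweight its terms
            weights[tok] += 1.0 if label else -0.25
        if label:
            found.append(best)
    return found

papers = {
    "p1": "defect prediction software engineering",
    "p2": "cooking recipes for pasta",
    "p3": "software defect models empirical study",
}
truth = {"p1": True, "p2": False, "p3": True}
found = review_loop(papers, truth.get, seed_terms=["defect", "software"], budget=3)
print(sorted(found))  # both relevant papers are found: ['p1', 'p3']
```

With a small budget the loop reads the keyword-rich papers first, which is the intuition behind faster relevant-paper discovery; the real tool replaces this scorer with a trained, self-correcting classifier.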
21. Position paper: Benchmarking the performance of global and emerging knowledge cities
- Author
- Tan Yigitcanlar
- Subjects
Strategic planning, Operations research, Policy making, Knowledge economy, General Engineering, Knowledge City, Urban policy, Benchmarking, Computer Science Applications, Artificial Intelligence, Urban planning, Political science, Regional science, Position paper
- Abstract
Knowledge-based development has become a new urban policy approach for the competitive cities of the global knowledge economy era. For those cities seeking knowledge-based development, benchmarking is an essential prerequisite for informed and strategic vision and policy making to achieve prosperous development. Nevertheless, benchmarked knowledge-based development performance analysis of global and emerging knowledge cities is an understudied area. This paper aims to contribute to the field by introducing the methodology of a novel performance assessment model, the Knowledge-Based Urban Development Assessment Model, and by providing lessons from the application of the model in an international knowledge city performance analysis study. The assessment model puts renowned global and emerging knowledge cities (Birmingham, Boston, Brisbane, Helsinki, Istanbul, Manchester, Melbourne, San Francisco, Sydney, Toronto, and Vancouver) under the knowledge-based development microscope. The results of the analysis provide an internationally benchmarked snapshot of the degree of achievement in various knowledge-based urban development performance areas of the investigated knowledge cities, and reveal insightful lessons for scrutinizing global perspectives on the knowledge-based development of cities.
- Published
- 2014
22. Citation recommendation using semantic representation of cited papers’ relations and content
- Author
- Lipeng Zhu and Jinzhu Zhang
- Subjects
Information retrieval, Computer science, Language change, General Engineering, Computer Science Applications, Artificial Intelligence, Content (measure theory), Similarity (psychology), Selection (linguistics), Graph (abstract data type), Macro, Representation (mathematics), Citation, General Literature - Reference (e.g., dictionaries, encyclopedias, glossaries)
- Abstract
Citation recommendation can help researchers quickly find supplementary or alternative references in massive academic resources. Current research on citation recommendation mainly focuses on the citing papers, so that the enormous body of cited papers is ignored, including the relations among cited papers and the citation contexts in which they are cited by citing papers. Moreover, a cited paper's content is often denoted by its original title and abstract, which is hard to acquire and rarely reflects different citation motivations. Furthermore, the most appropriate method for semantic representation of cited papers' relations and content is uncertain. Therefore, this paper studies citation recommendation from the perspective of the semantic representation of cited papers' relations and content. Firstly, four forms of citation context are designed and extracted as cited papers' content, taking citation motivations into account, and co-citation relationships are extracted as cited papers' relations. Secondly, 132 methods are designed for generating semantic vectors of cited papers, including four network embedding methods, 16 methods combining four text representation algorithms with four forms of citation content, and 112 fusion methods. Finally, similarity among cited papers is calculated for citation recommendation, and a quantitative evaluation method based on link prediction is designed to find the most appropriate form of citation content and the optimal method. The results show that doc2vecC (Document to Vector through Corruption) with the CS&SS (Current Sentences and Surrounding Sentences) form performs best: its AUC (Area Under Curve) and MAP (Macro Average Precision) reach 0.877 and 0.889, increases of 0.462 and 0.370 over the worst-performing method. This performance is slightly improved by parameter adjustment, and a case study is performed whose results further prove the effectiveness of the method.
In addition, among the four forms of cited papers' content, CS&SS performs best in almost all methods. Furthermore, the fusion methods do not always perform better than the single methods; doc2vecC (CS&SS) performs better than the best fusion method, GCN (Graph Convolutional Network). These results not only prove the effectiveness of citation recommendation from the perspective of the cited paper, but also provide helpful and useful suggestions for method selection and citation content selection. The data and conclusions can be extended to other text mining-related tasks. This is preliminary research that needs to be studied further in other domains using emerging semantic representation methods.
- Published
- 2022
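The link-prediction evaluation described in the citation-recommendation abstract above (score candidate cited-paper pairs by vector similarity, then measure AUC against held-out links) can be sketched minimally. The vectors and links below are toy stand-ins, and cosine similarity is an assumed scoring choice, not necessarily the paper's:

```python
# Toy sketch: score cited-paper pairs by cosine similarity of their semantic
# vectors, then evaluate with AUC over held-out positive vs negative pairs.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors (0.0 if either is zero)."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def auc(pos_scores, neg_scores):
    """Probability a random positive pair outscores a random negative pair
    (ties count one half) -- the standard ranking view of AUC."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

vecs = {
    "a": [1.0, 0.0, 1.0],
    "b": [0.9, 0.1, 0.8],   # close to "a"
    "c": [0.0, 1.0, 0.0],   # unrelated
}
pos = [("a", "b")]           # held-out co-citation link
neg = [("a", "c"), ("b", "c")]
pos_scores = [cosine(vecs[x], vecs[y]) for x, y in pos]
neg_scores = [cosine(vecs[x], vecs[y]) for x, y in neg]
print(auc(pos_scores, neg_scores))  # 1.0: the linked pair outscores both non-links
```

The same harness works unchanged whichever embedding produces the vectors, which is why the paper can compare 132 representation methods under one metric.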
23. Fluctuating demand and its impacts to a paper producer: Customer analysis
- Author
- Lauri Hetemäki, Esa Hämäläinen, and Olli-Pekka Hilmola
- Subjects
Customer retention, ta214, ta511, Customer profitability, General Engineering, Cost accounting, Price discrimination, Computer Science Applications, Bargaining power, Customer base, Artificial Intelligence, Business, Fixed cost, Customer to customer, Industrial organization
- Abstract
• Paper consumption has been in decline in the USA and Western Europe for years.
• Significant reduction in manufacturing capacity has taken place.
• To manage such an environment successfully, information systems and the customer base require attention.
• Even mature industries offer price differentiation potential within the customer base.
• Customer base changes can be extremely rapid in declining markets.
For the Nordic paper industry the years 2001-2007 were a culmination point for paper production and deliveries. The study uses plant-level empirical time series data from these years from one large integrated Finnish paper mill. The research data covers complete customer data and cost components for a major supplier to the European paper markets. The case company worked actively with its customer base and concentrated on the most profitable markets. However, some unprofitable deliveries, which still covered the variable and fixed costs, also supported the operation of the mill and the continuation of the 24/7 shift. The results indicate that even with matured and bulky products like paper, it is still possible to operate on the basis of separated pricing, some bargaining power, and a customized focus. In this respect, the markets do not seem to follow the economic theory expectation entirely; that is, for such a mature and bulky product, market price differentiation should not be possible. However, exploiting this requires an up-to-date information system for internal cost accounting, together with an emphasis on management taking an active role with the customer base.
- Published
- 2015
24. Multi-objective closed-loop supply chain network design: A novel robust stochastic, possibilistic, and flexible approach.
- Author
- Hosseini Dehshiri, Seyyed Jalaladdin, Amiri, Maghsoud, Olfat, Laya, and Pishvaee, Mir Saman
- Subjects
- SUPPLY chains, PAPER products, CARBON emissions, ENVIRONMENTAL auditing, SUSTAINABLE development, REMANUFACTURING
- Abstract
• Offering a novel fuzzy robust approach for closed-loop supply chain network design.
• Examining the hybrid uncertainties and flexibility of constraints in the problem.
• Considering economic, responsibility, and environmental subjects in modeling.
• Using the interactive fuzzy programming approach to solve the multi-objective model.
• Introducing a new application for stone paper closed-loop supply chain network design.
Nowadays, the production of stone paper, in addition to its widespread utilization in various fields, requires no water consumption or cutting down of trees, and stone paper products are easily recyclable and recoverable. Given the importance of developing and using stone paper and of attending to environmental matters, Closed-Loop Supply Chain Network Design (CLSCND) is very important for stone paper products. Moreover, due to epistemic and randomness uncertainties, uncertainties in the Objective Function (OF), and the flexible constraints of real-world CLSCND, this paper introduces a novel Mixed Robust Stochastic, Possibilistic, and Flexible Programming (MRSPFP) approach based on credibility theory. The different attitudes of the Decision-Makers (DMs) are addressed by a more flexible measurement of the optimistic and pessimistic parameters using the credibility criterion. A comprehensive procedure is proposed for stone paper CLSCND that minimizes costs, increases responsiveness by minimizing transit time between facilities, and takes environmental concerns into account by minimizing carbon emissions. The model is solved using an interactive fuzzy programming solution procedure and the Best-Worst Method (BWM). The results of sensitivity analysis, the effect of changing the problem parameters, and the performance of the proposed model are investigated and compared. The results show that the MRSPFP model performs better than previous models. The MRSPFP model performs better in strategic decisions that require high investment costs because it minimizes the absolute deviation of the OF from its mean. The applied results of the study also show that CLSCND has good capability and potential for sustainable development in the field of stone paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Toward energy-efficient online Complete Coverage Path Planning of a ship hull maintenance robot based on Glasius Bio-inspired Neural Network.
- Author
- Muthugala, M.A. Viraj J., Samarakoon, S.M. Bhagya P., and Elara, Mohan Rajesh
- Subjects
- SHIP maintenance, PAPER arts, ENERGY consumption, ROBOTS, NAVAL architecture
- Abstract
Regular ship hull maintenance is essential for sustainability. Ship hull maintenance work that involves human labor suffers from many shortcomings, and maintenance robots have been introduced in drydocks to eliminate them. Energy-efficient Complete Coverage Path Planning (CCPP) is a crucial requirement for a ship hull maintenance robot. This paper proposes a novel energy-efficient CCPP method based on a Glasius Bio-inspired Neural Network (GBNN) for a ship hull inspection robot. The proposed method accounts for a comprehensive energy model in path planning; this model reflects the energy a ship hull maintenance robot uses due to changes in direction, distance, and vertical position. Furthermore, the proposed method is effective for dynamic workspaces since it performs online path planning. These are the major contributions of this work to the state of the art. The behavior and performance of the proposed method have been compared against the state of the art through simulations of Hornbill, a multipurpose ship hull maintenance robot. The validation confirms the ability of the proposed method to realize complete coverage of a given dynamic workspace. According to the statistical outcomes of the comparison, the proposed method significantly surpasses state-of-the-art methods in terms of energy usage, and thus contributes to the development of energy-efficient CCPP methods for ship hull maintenance robots.
• A novel coverage method based on a Glasius Bio-inspired Neural Network is proposed.
• The proposed method is intended for a multipurpose ship hull maintenance robot.
• A comprehensive energy model is utilized by the proposed method for path planning.
• The proposed method is significantly more efficient than state-of-the-art methods.
• The paper contributes to the development of a ship hull maintenance robot.
- Published
- 2022
- Full Text
- View/download PDF
26. Generating survey draft based on closeness of position distributions of key words.
- Author
- Sun, Xiaoping and Zhuge, Hai
- Subjects
- TEXT summarization, CURVES
- Abstract
Automatically generating a survey draft is a challenge for text summarization research: it requires selecting important sentences from important references within a large set of candidate papers to compose sections that are in line with the section titles, where different sections discuss different numbers of the most relevant reference papers. This is beyond the capability of previous text summarization approaches, which assume that all candidate papers should be included in one summary. This paper proposes an approach to generating a survey draft according to a pattern consisting of sections whose titles are given by the user who requests the survey. The problem of generating each section can be divided into the following sub-problems: (1) rank the input scientific documents (in short, documents) according to the title of a section; (2) determine the number of documents that are most relevant to the title; and (3) rank and select sentences from the selected documents according to the title. A position closeness distance of key words is proposed to rank a set of documents by measuring how closely two key words from the section title are distributed within each document. The rationale is that the positions of neighboring key words of a section title should be closer in more relevant documents than in others. As different sections have different numbers of selected documents, a method is proposed to determine the number of documents to include in the current section based on the slope shape of the sorted rank curve of documents for the section title. Based on the duality property of the closeness, ranks of sentences within a document can be obtained directly when the document is ranked according to the section title, and both the importance and coherence of selected sentences can be reflected without extra calculation for ranking sentences.
Experiments and manual evaluation show that the proposed methods achieve significant improvements compared with other approaches. The proposed approach is significant in applications, as different surveys can be generated according to different patterns given by different users.
- Published
- 2024
- Full Text
- View/download PDF
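The position-closeness idea in the survey-drafting abstract above (two title key words should occur close together in relevant documents) can be sketched as a document ranker. The exact distance the paper defines may differ; `closeness` below is a simple minimum token gap chosen purely for illustration:

```python
# Illustrative sketch: rank documents by how closely two section-title
# key words co-occur, in the spirit of the position-closeness distance.

def closeness(doc, kw1, kw2):
    """Smallest token-position gap between occurrences of kw1 and kw2;
    float('inf') if either word is absent from the document."""
    toks = doc.lower().split()
    pos1 = [i for i, t in enumerate(toks) if t == kw1]
    pos2 = [i for i, t in enumerate(toks) if t == kw2]
    if not pos1 or not pos2:
        return float("inf")
    return min(abs(i - j) for i in pos1 for j in pos2)

def rank_documents(docs, kw1, kw2):
    """Sort document ids by ascending closeness (closer = more relevant)."""
    return sorted(docs, key=lambda d: closeness(docs[d], kw1, kw2))

docs = {
    "a": "graph neural network training with graph sampling",
    "b": "neural models and a separate note on network hardware",
    "c": "cooking with induction stoves",
}
print(rank_documents(docs, "neural", "network"))  # ['a', 'b', 'c']
```

Document "a" ranks first because its key words are adjacent (gap 1), while in "b" they are seven tokens apart and "c" lacks them entirely, which matches the abstract's intuition that relevance shows up as positional proximity.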
27. Fuzzy ontology datatype learning using Datil
- Author
- Huitzil, Ignacio and Bobillo, Fernando
- Published
- 2023
- Full Text
- View/download PDF
28. Modeling supply-chain networks with firm-to-firm wire transfers
- Author
- Silva, Thiago Christiano, Amancio, Diego Raphael, and Tabak, Benjamin Miranda
- Published
- 2022
- Full Text
- View/download PDF
29. “Taps”: A trading approach based on deterministic sign patterns
- Author
- Liu, Xi and Thomakos, Dimitrios D.
- Published
- 2021
- Full Text
- View/download PDF
30. Succinct contrast sets via false positive controlling with an application in clinical process redesign
- Author
- Nguyen, Dang, Luo, Wei, Vo, Bay, and Pedrycz, Witold
- Published
- 2020
- Full Text
- View/download PDF
31. Framework for syntactic string similarity measures
- Author
- Gali, Najlah, Mariescu-Istodor, Radu, Hostettler, Damien, and Fränti, Pasi
- Published
- 2019
- Full Text
- View/download PDF
32. Improved network intrusion classification with attention-assisted bidirectional LSTM and optimized sparse contractive autoencoders
- Author
- Bi, Jing, Guan, Ziyue, Yuan, Haitao, and Zhang, Jia
- Published
- 2024
- Full Text
- View/download PDF
33. A novel cross-domain adaptation framework for unsupervised criminal jargon detection via pre-trained contextual embedding of darknet corpus
- Author
- Ke, Liang, Xiao, Peng, Chen, Xinyu, Yu, Shui, Chen, Xingshu, and Wang, Haizhou
- Published
- 2024
- Full Text
- View/download PDF
34. Matching cost function analysis and disparity optimization for low-quality binocular images.
- Author
- Hongjin, Zhang, Hui, Wei, and Huilan, Luo
- Subjects
- COST functions, COST analysis, GEOMETRIC analysis, ENERGY function, IMAGE analysis, MATCHING theory
- Abstract
State-of-the-art dense stereo matching algorithms have achieved excellent performance, demonstrating a capability to attain precise matching in most areas. However, such methods rarely achieve this when images are captured under poor conditions. To improve accuracy in such cases, this paper introduces a post-optimization algorithm that rectifies matching errors and enhances outcomes. The main contributions of this paper cover three aspects. (1) Disparities are classified into reliable and unreliable results based on the analysis of geometric matching relationships, local features in the images, and components within the matching cost function. (2) Subsequent analysis of horizontal image features identifies local characteristic indices, calculated through integration along the horizontal axis, which establish specific matching criteria and form the foundation for a cost volume that encompasses these distinct matches. (3) A redefined matching cost function, an energy function based on the cost volume above, is applied to the previously classified unreliable results to rectify matching errors. Experimental results validate the efficacy of the proposed post-optimization algorithm, which reduces the average matching error from 8.66% to 5.85%.
- Published
- 2024
- Full Text
- View/download PDF
35. A blind signature scheme for IoV based on 2D-SCML image encryption and lattice cipher.
- Author
- Gao, Mengli, Li, Jinqing, Di, Xiaoqiang, Li, Xusheng, and Zhang, Mingao
- Subjects
- IMAGE encryption, PUBLIC key cryptography, CIPHERS, DISCLOSURE, MAP design, DATA transmission systems
- Abstract
Today's Internet of Vehicles (IoV) faces many security risks during data transmission, and image data is especially vulnerable because of its large information content and high visibility. Therefore, to guarantee dependable data transmission in the IoV environment, this paper designs a blind signature scheme for IoV based on two-dimensional sine-cosine cross-chaotic mapping (2D-SCML) image encryption and a lattice cipher (BSS-IoV). The innovation of this scheme is that it targets blind signatures of image information: it blinds the information before it is sent and combines this with a lattice public-key encryption algorithm to better ensure safe and reliable transmission and reduce the risk of information disclosure. To further ensure the security of the scheme, an image encryption algorithm based on 2D-SCML and pixel splitting (2PS-IEA) is proposed, which is used both to blind the information, reducing the risk of information leakage, and within the signature process, to ensure the security of the signed information. The 2D-SCML map is derived from the cross-model structure proposed in this paper. Simulation results and experimental analysis show NPCR and UACI values of 99.6094% and 33.4635% respectively, close to the ideal values, and rough image information can be recovered even when 50% of the image is cut, which indicates that the signature scheme is secure against differential attacks, cut attacks, and noise attacks. Moreover, the security analysis shows that the scheme is tamper-resistant, non-repudiable, and traceable.
• Designed a blind signature scheme for IoV using image encryption and a lattice cipher.
• Proposed a crossover model structure.
• Designed a 2D-SCML mapping with superior performance based on this model.
• Devised an image encryption algorithm based on 2D-SCML and pixel splitting.
• The signature scheme and encryption scheme are analyzed experimentally.
- Published
- 2024
- Full Text
- View/download PDF
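The NPCR and UACI figures quoted in the blind-signature abstract above are the standard differential-analysis metrics for image ciphers. A minimal sketch of both, over 8-bit grayscale images stored as nested lists, follows; the paper's own encryption scheme is not reproduced, only the metric definitions:

```python
# NPCR: percentage of pixel positions whose values differ between two cipher
# images. UACI: mean absolute intensity difference normalized by 255.

def npcr(c1, c2):
    """Number of Pixels Change Rate, in percent."""
    total = sum(len(row) for row in c1)
    diff = sum(a != b for r1, r2 in zip(c1, c2) for a, b in zip(r1, r2))
    return 100.0 * diff / total

def uaci(c1, c2):
    """Unified Average Changing Intensity, in percent, for 8-bit depth."""
    total = sum(len(row) for row in c1)
    s = sum(abs(a - b) for r1, r2 in zip(c1, c2) for a, b in zip(r1, r2))
    return 100.0 * s / (255 * total)

c1 = [[0, 255], [128, 64]]
c2 = [[0, 0], [128, 65]]
print(npcr(c1, c2))  # 2 of 4 pixels differ -> 50.0
print(uaci(c1, c2))  # average change of 64 levels over 255 -> ~25.1
```

For a strong cipher the expected values on random 8-bit images are about 99.61% (NPCR) and 33.46% (UACI), which is why the abstract's 99.6094% and 33.4635% are described as close to ideal.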
36. Bandit algorithms: A comprehensive review and their dynamic selection from a portfolio for multicriteria top-k recommendation.
- Author
- Letard, Alexandre, Gutowski, Nicolas, Camp, Olivier, and Amghar, Tassadit
- Subjects
- RECOMMENDER systems, FUZZY sets, ALGORITHMS, REINFORCEMENT learning
- Abstract
This paper discusses the use of portfolio approaches based on bandit algorithms to optimize multicriteria decision-making (accuracy and diversity) in recommender systems. While previous research has primarily focused on single-item recommendations, this study extends the research to the recommendation of several items per iteration. Two methods, Multiple-play Gorthaur and Budgeted-Gorthaur, are proposed to solve the algorithm selection problem, and their performances on real-world datasets are compared. Both methods generalize the Gorthaur method, enabling it to operate with any Multi-Armed Bandit (MAB) or Contextual Multi-Armed Bandit (CMAB) algorithm as meta-algorithm in a multi-item recommendation scenario. For Multiple-play Gorthaur, an empirical evaluation shows that using Thompson Sampling for algorithm selection (Gorthaur-TS) yields better results in contextual recommendation problems than the original EXP3 method (Gorthaur-EXP3) and than the exclusive use of the optimal algorithm in the portfolio. Additionally, the paper includes a theoretical regret analysis based on the TS sketch proof applied to this variant of the method. Concerning Budgeted-Gorthaur, experiments show that it allows more flexibility in achieving a suitable trade-off between criteria and a broader coverage of the Pareto set of solutions, overcoming a natural limit of "a-priori" methods. Finally, this paper provides a detailed review, including pseudocodes and theoretical bounds, of all the fundamental MAB and CMAB algorithms used in this study.
• Bandit literature lacks a formal algorithm review, hindering clarity and comparability.
• There is no silver bullet: no algorithm can be the best performer in every instance.
• Recommender systems need to balance accuracy, diversity, and multi-item recommendations.
• The optimal algorithm balances criteria, matching the decision maker's preferred trade-off.
• Dynamic selection ensures safe performance when the optimal algorithm is unknown.
- Published
- 2024
- Full Text
- View/download PDF
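The Thompson Sampling selection rule that Gorthaur-TS applies at the portfolio level can be illustrated with a minimal Beta-Bernoulli sketch. The two arms, their reward rates, and the seed below are synthetic stand-ins for the portfolio's algorithms and real user feedback, not anything from the paper's experiments:

```python
# Beta-Bernoulli Thompson Sampling: sample a plausible success rate per arm
# from its posterior and play the arm with the highest sampled value.
import random

def thompson_select(successes, failures):
    """Draw a Beta(s+1, f+1) sample per arm and return the argmax index."""
    draws = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

def run(true_rates, rounds, seed=0):
    """Simulate Bernoulli-reward rounds and return per-arm success/failure counts."""
    random.seed(seed)
    k = len(true_rates)
    succ, fail = [0] * k, [0] * k
    for _ in range(rounds):
        arm = thompson_select(succ, fail)
        if random.random() < true_rates[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail

succ, fail = run([0.2, 0.8], rounds=500)
pulls = [s + f for s, f in zip(succ, fail)]
print(pulls)  # the better arm (index 1) should attract most of the pulls
```

In Gorthaur-TS each "arm" is itself a bandit algorithm in the portfolio and the reward reflects the multicriteria recommendation objective, but the posterior-sampling selection step is exactly this.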
37. Human activity recognition with smartphone-integrated sensors: A survey.
- Author
- Dentamaro, Vincenzo, Gattulli, Vincenzo, Impedovo, Donato, and Manca, Fabio
- Subjects
- HUMAN activity recognition, MACHINE learning, FEATURE selection, FEATURE extraction, DETECTORS
- Abstract
• Newbie study using standard ML techniques, with HAR applications and discussions.
• Activities found in the literature with the corresponding references.
• Co-occurrences between activities and sensors with the corresponding references.
• Summary and comparison of the different datasets found in the literature.
• Summary of the experimentation settings with performance scores found in the literature.
Human Activity Recognition (HAR) is an essential area of research related to the ability of smartphones to retrieve information through embedded sensors and recognize the activity that humans are performing. Researchers have recognized people's activities by processing the data received from the sensors with machine learning models. This work is intended to be a hands-on survey with practical tables capable of guiding the reader through the sensors used in modern smartphones and highly cited machine learning models that perform human activity recognition. Several papers in the literature have been studied, paying attention to the preprocessing, feature extraction, feature selection, and classification techniques of HAR systems. In addition, several summary tables illustrating HAR approaches are provided: the most popular human activities in the literature with paper references; the most popular datasets available for download (analyzing their characteristics, such as the number of subjects involved, the activities recorded, and the sensors with online availability); co-occurrences between activities and sensors; and a summary table showing the performance obtained by researchers. The paper's goal is to recommend, through the discussion phase and thanks to the tables, the current state of the art on this topic.
- Published
- 2024
- Full Text
- View/download PDF
38. A new community detection method for simplified networks by combining structure and attribute information.
- Author
- Cai, Jianghui, Hao, Jing, Yang, Haifeng, Yang, Yuqing, Zhao, Xujun, Xun, Yaling, and Zhang, Dongchao
- Subjects
- DENSITY
- Abstract
Complex networks have a large number of nodes and edges, which hinders the understanding of network structure and the discovery of valid information. This paper proposes a new community detection method for simplified networks. First, a similarity measure is defined in which path and attribute information reflect the potential relationship between nodes that are not directly connected. Based on the defined similarity, an Importance Score (IS) is constructed to show the importance of each node; it reflects the density around each node. The simplification process can then be carried out on complex networks. On the simplified network, this paper proposes a novel community detection method with which the community structure of the simplified network is detected. Experiments were conducted on real networks and compared with several widely used methods. The experimental results illustrate that the proposed method is more advantageous and can visually and effectively uncover the community structure.
- Published
- 2024
- Full Text
- View/download PDF
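As a rough illustration of the combined structure-and-attribute similarity and node-importance score the abstract above describes: the Jaccard form, the attribute overlap, and the `alpha` blend below are my illustrative assumptions, not the paper's exact definitions.

```python
def similarity(g, attrs, u, v, alpha=0.5):
    """Blend structural similarity (Jaccard over closed neighbourhoods)
    with attribute similarity (fraction of shared attributes)."""
    nu, nv = set(g[u]) | {u}, set(g[v]) | {v}
    struct = len(nu & nv) / len(nu | nv)
    au, av = set(attrs[u]), set(attrs[v])
    attr = len(au & av) / max(1, len(au | av))
    return alpha * struct + (1 - alpha) * attr

def importance_score(g, attrs, u):
    """IS(u): total similarity to neighbours, a proxy for the density
    around the node that drives the network simplification step."""
    return sum(similarity(g, attrs, u, v) for v in g[u])
```

Nodes with a low IS are the natural candidates to merge or drop when simplifying the network before community detection.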
39. Convolutional neural networks for quality and species sorting of roundwood with image and numerical data.
- Author
-
Achatz, Julia, Lukovic, Mirko, Hilt, Simon, Lädrach, Thomas, and Schubert, Mark
- Subjects
- *CONVOLUTIONAL neural networks, *CROSS-sectional imaging, *RECOMMENDER systems, *IMAGE recognition (Computer vision), *FEATURE selection, *HUMAN error, *MACHINE learning
- Abstract
Roundwood sorting is still a manual process in many Swiss sawmills, requiring employees to visually inspect and categorize thousands of logs per day. The heavy workload can be both physically and mentally taxing and can lead to increased rates of human error. State-of-the-art automation systems like X-ray log scanners are expensive and difficult to integrate into existing process lines. This paper proposes a novel recommendation system that leverages recent advances in image classification to automate roundwood classification by quality and species. The system integrates a camera to capture cross-sectional images of logs and record numerical data, such as length, taper, and diameter. The analysis of the resulting dataset highlights the challenges of data imbalance and noise, which make classification difficult and, in some cases, impossible. However, by using selected datasets with reduced noise, state-of-the-art Convolutional Neural Networks (CNNs) can extract quality and species features. Quality models learn from a manually selected and simplified dataset, featuring samples that experts can clearly classify based on the image's information. Species models are trained on a label-noise-reduced dataset, reflecting real-world complexity. The accuracy on the selected dataset for three quality classes is 80%. Species determination is less challenging and reaches 91% accuracy on a synchronized dataset for the main species, spruce and fir. Overall, this paper highlights the potential of machine learning in augmenting roundwood sorting processes and presents a novel system that can improve the efficiency and accuracy of the process.
• Automation of roundwood sorting: replaces manual sorting with image-based AI.
• Integrated camera system in roundwood sorting to collect a labeled dataset.
• Species prediction: 91% accuracy in the spruce–fir distinction.
• Quality prediction on a complexity-reduced dataset: 80% accuracy across three main quality levels.
• Efficient, adaptable, and scalable system that is easy to integrate into existing process lines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A malware detection model based on imbalanced heterogeneous graph embeddings.
- Author
-
Li, Tun, Luo, Ya, Wan, Xin, Li, Qian, Liu, Qilie, Wang, Rong, Jia, Chaolong, and Xiao, Yunpeng
- Subjects
- *MALWARE, *GENERATIVE adversarial networks, *COMPUTER security, *COMPUTER software industry, *CLASSIFICATION algorithms, *INFORMATION networks
- Abstract
The proliferation of malware in recent years has posed a significant threat to the security of computers and mobile devices. Detecting malware, especially on the Android platform, has become a growing concern for researchers and the software industry. This paper proposes a new method for detecting Android malware based on imbalanced heterogeneous graph embedding. First, most malware datasets contain an imbalance of malicious and benign samples, since some types of malware are scarce and difficult to collect. As a result, the classification algorithm cannot learn the minority classes from sufficient data, which degrades downstream classifier performance. Because generative adversarial networks are able to complete missing data, an algorithm for generating graph-structured data is presented, in which nodes are generated to simulate the distribution of minority nodes within the network topology. Then, since heterogeneous information networks retain rich node semantic features and can mine implicit relationships, heterogeneous graphs are used to model different types of entities (i.e., apps, APIs, permissions, intents, etc.) and different meta-paths. Finally, a new method is introduced to alleviate the over-smoothing of node information during propagation in deep networks: in the deep GCN, the leader nodes of each layer are first sampled, and a residual connection and an identity map are then added to capture the characteristics of high-order leaders. A self-attention-based semantic fusion method is also applied to adaptively fuse the embedded representations of software nodes under different meta-paths. Test results demonstrate that the proposed IHODroid model effectively detects malicious software.
On the DREBIN dataset, which consists of 123,453 Android applications and 5,560 malicious samples, the IHODroid model achieves an accuracy of 0.9360 and an F1 score of 0.9360, outperforming other state-of-the-art baseline methods.
• A new generative adversarial network model is proposed for balancing data.
• Heterogeneous graphs are used for modeling malware detection.
• A new method is introduced to alleviate the over-smoothing phenomenon. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. MV-Checker: A software tool for multi-valued model checking intelligent applications with trust and commitment.
- Author
-
Alwhishi, Ghalya, Bentahar, Jamal, Elwhishi, Ahmed, and Pedrycz, Witold
- Subjects
- *ARTIFICIAL intelligence, *PROPOSITION (Logic), *BLOCKCHAINS
- Abstract
Intelligent applications are highly susceptible to uncertainty and inconsistency due to the intense and intricate interactions among their autonomous components (or agents), making their verification theoretically and practically challenging. This paper presents the design and implementation of a new open-source and scalable software tool for modeling and verifying intelligent applications with commitment and trust protocols under both uncertainty and inconsistency, using reduction-based multi-valued model checking techniques. The proposed tool is equipped with original and novel algorithms that transform our recently introduced logics of multi-valued commitment (mv-CTLC) and multi-valued trust (mv-TCTL) into their classical two-valued commitment (CTLC) and trust (TCTL) versions, as well as into Computation Tree Logic (CTL). Moreover, the tool transforms mv-CTL into CTL, and it supports classical model checking by transforming the classical logics of trust and commitment into CTL. To demonstrate the practicality and applicability of the proposed tool in real settings, we report experimental results on two blockchain-based applications in the healthcare domain. Finally, we discuss and compare the proposed approaches with regard to scalability and efficiency, and we provide packages of more than 11 experiments, including the ones conducted in this paper and enhanced experiments from previous works. Our findings show that the proposed approaches, and the software tool that implements them, are highly efficient and scalable, giving accurate results under varying conditions.
• Design and implementation of an open-source scalable tool for systems model checking.
• System modeling with multi-valued logic capturing uncertainty and inconsistency.
• Practical demonstration using blockchain-based applications in the healthcare domain.
• Extensive experiments showing scalability, efficiency, and reliability of the tool. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Conceptual clustering with application on FCA context.
- Author
-
Kovács, László
- Subjects
- *K-means clustering
- Abstract
Conceptual clustering is one of the key approaches for automatic concept generation from input contexts. In this paper, we propose an extension of the dominant k-means method. The proposed method introduces a flexible distance metric that enables the approximation of both Euclidean and meet- (or join-)based similarity calculations. To increase the approximation accuracy, the method combines k-means with a Quality Threshold component. The paper shows that the method can also be used to approximate formal concept lattices. In the performed tests, the method provides an efficient alternative conceptual clustering approach.
• Novel extension of the k-means-based conceptual clustering algorithm.
• Novel parameterized distance metric covering different aggregation approaches.
• Tool for flexible concept set reduction in formal concept analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
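The two ingredients named in the abstract, a parameterised distance metric and a Quality Threshold component bolted onto k-means, can be sketched as follows. Interpreting the parameterisation as a Minkowski exponent (where p=2 is Euclidean and a large p approaches a join-like max distance) is my assumption, not the paper's definition.

```python
import math, random

def dist(x, y, p=2.0):
    """Parameterised Minkowski distance: p=2 is Euclidean; a large p
    approximates the maximum (join-like) component difference."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def qt_kmeans(points, k, threshold, p=2.0, iters=20, seed=0):
    """k-means whose assignment step spawns a new cluster whenever a
    point lies farther than `threshold` from every existing centroid
    (the quality-threshold escape hatch)."""
    rng = random.Random(seed)
    cents = [list(c) for c in rng.sample(points, k)]
    for _ in range(iters):
        groups = {i: [] for i in range(len(cents))}
        for pt in points:
            ds = [dist(pt, c, p) for c in cents]
            i = min(range(len(ds)), key=ds.__getitem__)
            if ds[i] > threshold:          # too far from everything:
                cents.append(list(pt))     # start a new cluster here
                groups[len(cents) - 1] = [pt]
            else:
                groups[i].append(pt)
        cents = [[sum(col) / len(col) for col in zip(*g)]
                 for g in groups.values() if g]
    return cents
```

On two well-separated blobs with k=1 and a tight threshold, the quality-threshold step recovers the second cluster that plain k-means would have merged away.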
43. Nataf-KernelDensity-Spline-based point estimate method for handling wind power correlation in probabilistic load flow.
- Author
-
Shaik, Mahmmadsufiyan, Gaonkar, Dattatraya N., Nuvvula, Ramakrishna S.S., Muyeen, S.M., Shezan, Sk. A., and Shafiullah, G.M.
- Subjects
- *WIND power, *PROBABILITY density function, *MONTE Carlo method, *SOLAR energy, *RENEWABLE energy sources, *WIND speed
- Abstract
Modern power systems integrated with renewable energies (REs) contain many uncertainties. This paper introduces a novel approach to the challenges associated with wind power generation uncertainty in probabilistic load flow (PLF) studies. Unlike conventional methods that use wind speed as an input, the paper advocates utilizing wind generator output power (WGOP) as the input to the point estimate method (PEM) for solving PLF. The uniqueness lies in recognizing the distinct behavior of wind power uncertainty, where not all random samples of wind speed contribute to actual wind power production. The paper proposes a Nataf-KernelDensity-Spline-based PEM, combining the Nataf transformation, kernel density estimation (KDE), and cubic spline interpolation. This integration effectively manages wind power correlation within the analytical framework, and incorporating spline interpolation and kernel density estimation into the traditional PEM significantly enhances accuracy. To validate the effectiveness of the proposed approach, the method is applied to the IEEE-9 and IEEE-57 bus test systems, considering uncertainties related to load, wind power generation (WPG), solar power generation (SPG), and conventional generator (CoG) outages. Comparative analysis with Monte Carlo simulation (MCS) results demonstrates that the proposed method outperforms the conventional PEM in terms of accuracy. Overall, the paper contributes a pioneering solution that highlights the importance of using WGOP as an input in PLF and introduces a method that surpasses traditional approaches, improving accuracy in power system studies involving renewable energy integration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
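To make the kernel-density-plus-interpolation idea concrete, here is a hedged one-dimensional sketch. The paper's method uses the Nataf transformation and cubic spline interpolation; this stand-in uses a Gaussian KDE and a linear grid inversion purely for illustration, and the toy wind-power samples are invented.

```python
import math

def gaussian_kde(samples, h):
    """1-D Gaussian kernel density estimate with bandwidth h."""
    n = len(samples)
    norm = n * h * math.sqrt(2 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return pdf

def inverse_cdf(pdf, lo, hi, n=2000):
    """Tabulate the CDF on a grid and invert it by a linear scan.
    (A cubic spline, as in the paper, would smooth this inversion;
    the linear scan keeps the sketch short.)"""
    step = (hi - lo) / n
    xs, cdf, acc = [], [], 0.0
    for i in range(n + 1):
        x = lo + i * step
        acc += pdf(x) * step
        xs.append(x)
        cdf.append(acc)
    total = cdf[-1]
    def inv(u):
        target = u * total
        for x, c in zip(xs, cdf):
            if c >= target:
                return x
        return xs[-1]
    return inv
```

Given an inverse CDF per wind generator, a PEM scheme evaluates the load flow at a handful of deterministic quantile points instead of the thousands of draws Monte Carlo needs.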
44. Deep semi-supervised learning for medical image segmentation: A review.
- Author
-
Han, Kai, Sheng, Victor S., Song, Yuqing, Liu, Yi, Qiu, Chengjian, Ma, Siqi, and Liu, Zhe
- Subjects
- *SUPERVISED learning, *DEEP learning, *IMAGE segmentation, *DIAGNOSTIC imaging, *COMPUTER vision, *IMAGE analysis
- Abstract
Deep learning has recently demonstrated considerable promise for a variety of computer vision tasks. However, in many practical applications, large-scale labeled datasets are not available, which limits the deployment of deep learning. To address this problem, semi-supervised learning has attracted a lot of attention in the computer vision community, especially in the field of medical image analysis. This paper analyzes existing deep semi-supervised medical image segmentation studies and categorizes them into five main categories (i.e., pseudo-labeling, consistency regularization, GAN-based methods, contrastive learning-based methods, and hybrid methods). Afterward, we empirically analyze several representative methods by conducting experiments on two common datasets. We also point out several promising directions for future research. In summary, this paper provides a comprehensive introduction to deep semi-supervised medical image segmentation, aiming to provide a reference and comparison of methods for researchers in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
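Of the five categories the review above lists, pseudo-labeling is the easiest to sketch. The loop below is a generic illustration: the `fit`/`predict_proba` callables, the confidence threshold, and the round count are assumptions of this sketch, not any specific surveyed method.

```python
def pseudo_label_rounds(fit, predict_proba, X_lab, y_lab, X_unlab,
                        threshold=0.9, rounds=3):
    """Generic pseudo-labeling loop: repeatedly train on the labeled
    pool, then promote unlabeled samples whose predicted-class
    confidence exceeds `threshold` into the labeled pool."""
    X_lab, y_lab, X_unlab = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(rounds):
        model = fit(X_lab, y_lab)
        keep = []
        for x in X_unlab:
            label, conf = predict_proba(model, x)
            if conf >= threshold:
                X_lab.append(x)
                y_lab.append(label)
            else:
                keep.append(x)          # stays unlabeled this round
        X_unlab = keep
        if not X_unlab:
            break
    return X_lab, y_lab
```

In segmentation, `x` would be an image and the "label" a predicted mask with a per-pixel confidence; the control flow is unchanged.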
45. Fusion of theory and data-driven model in hot plate rolling: A case study of rolling force prediction.
- Author
-
Dong, Zishuo, Li, Xu, Luan, Feng, Meng, Lingming, Ding, Jingguo, and Zhang, Dianhua
- Subjects
- *HOT rolling, *ARTIFICIAL neural networks, *MODEL theory, *MANUFACTURING processes, *SEARCH algorithms
- Abstract
Rolling force is one of the most critical variables in the hot rolling process, and the accuracy of its prediction is directly associated with production stability and product quality. Purely data-driven approaches, however, are severely constrained by the quantity and quality of data, posing challenges for further enhancing the accuracy of rolling force prediction. In this paper, a theory-fusion deep neural network (DNN) modelling approach is proposed and applied to the prediction of rolling force during hot plate rolling. For model establishment, a novel NN structure was designed in consideration of the rolling mechanism, and senior variable inputs were added at shallow locations in the network to reduce the loss of critical information. For model training, rolling theory is used to guide the initialization of the model, enabling it to learn the theoretical features more completely in the pre-training phase. Finally, a method to optimize the overall structure of the model using the sparrow search algorithm (SSA) is proposed to ensure the best prediction performance. The model was tested with the data in the developed platform, and the results indicate that the proposed method achieves the best accuracy and stability among the methods compared in this paper, and that the response relationship between model inputs and output is consistent with existing theoretical knowledge. Thus, the model can be trusted and flexibly applied to actual manufacturing processes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Scheduling optimization of underground mine trackless transportation based on improved estimation of distribution algorithm.
- Author
-
Li, Ning, Wu, Yahui, Ye, Haiwang, Wang, Liguan, Wang, Qizhou, and Jia, Mingtao
- Subjects
- *MINES & mineral resources, *DISTRIBUTION (Probability theory), *TRANSPORTATION costs, *PARTICLE swarm optimization
- Abstract
The trend in underground mine development is toward trackless transportation, and scheduling optimization for underground trackless transportation is a current research hotspot. This paper proposes a truck scheduling optimization method based on an improved estimation of distribution algorithm to address the truck scheduling problem in underground mine trackless transportation. The transportation process of trucks in underground mines is analyzed, and a dispatching model is constructed to reduce transportation costs and increase transportation efficiency, taking truck meetings on ramp sections into account and using the total shift transportation distance and the total truck waiting time as objective functions. The improved estimation of distribution algorithm is used to solve the truck scheduling model, yielding the optimal ore blending and scheduling schemes. A comparative analysis employs a genetic algorithm, a particle swarm optimization algorithm, and an immune algorithm. The results demonstrate that, compared to the other algorithms, the improved estimation of distribution algorithm proposed in this paper has superior performance in terms of convergence speed and the search for the optimal solution. The total number of transportation tasks associated with the optimal ore allocation scheme is at least 82, and the waiting time associated with the optimal scheduling scheme is reduced to 7.5 min. The operation time chart of the transport trucks calculated from the optimal dispatching scheme clearly depicts the location of each truck at any time during a shift, which provides significant guidance for actual truck transportation in the mine. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
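The core idea of an estimation of distribution algorithm (fit a probability model to the best solutions, then sample the next population from it) can be sketched in a few lines. This toy binary UMDA on a generic fitness function is my stand-in; the paper's improved EDA for truck scheduling operates on a far richer solution encoding.

```python
import random

def umda(fitness, n_bits, pop=60, elite=20, gens=40, seed=1):
    """Minimal univariate estimation-of-distribution algorithm:
    each generation, sample a population from per-bit probabilities,
    then re-estimate those probabilities from the elite individuals."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits
    best = None
    for _ in range(gens):
        popn = [[1 if rng.random() < p else 0 for p in probs]
                for _ in range(pop)]
        popn.sort(key=fitness, reverse=True)
        top = popn[:elite]
        if best is None or fitness(top[0]) > fitness(best):
            best = top[0]
        # refit the distribution to the elite, clamped away from 0/1
        # so the model never loses the ability to explore
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in top) / elite))
                 for i in range(n_bits)]
    return best
```

For scheduling, each bit (or categorical slot) would encode a dispatch decision, and `fitness` would score total distance and waiting time.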
47. An intrusion detection algorithm based on joint symmetric uncertainty and hyperparameter optimized fusion neural network.
- Author
-
Wang, Qian, Jiang, Haiyang, Ren, Jiadong, Liu, Han, Wang, Xuehang, and Zhang, Bing
- Subjects
- *INTRUSION detection systems (Computer security), *CONVOLUTIONAL neural networks, *PARTICLE swarm optimization, *FEATURE selection, *ALGORITHMS, *HUMAN fingerprints, *COMPUTER network security
- Abstract
Intrusion Detection Systems (IDS) can ensure network security by identifying network intrusions from abnormal traffic data. However, intrusion detection data are high-dimensional and change with network and attack environments, which leads to poor performance and poor portability of intrusion detection algorithms. Therefore, this paper proposes an intrusion detection algorithm based on joint symmetric uncertainty and a hyperparameter-optimized fusion neural network. First, a feature selection method based on symmetric uncertainty and approximate Markov blankets is proposed, which fully considers the correlation and redundancy of features, as well as the correlation between combined features and the class label, so as to reduce the data dimensionality. Second, a CNN-LSTM classifier fusing a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is used to extract spatial and temporal features to improve classification performance. Finally, the Particle Swarm Optimization (PSO) algorithm is improved and used to automatically optimize the hyperparameters of the classifier, so that the classifier can be applied to different intrusion detection datasets with better generalization ability and portability. Experiments have verified the effectiveness and superiority of the proposed algorithm on multiple evaluation indicators.
• An effective algorithm for intrusion detection is proposed in this paper.
• Feature selection is based on symmetric uncertainty and approximate Markov blankets.
• A fusion neural network is constructed to extract spatial and temporal features.
• The PSO algorithm is improved to automatically optimize the hyperparameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
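The symmetric uncertainty measure at the heart of the feature selection step has a standard closed form, sketched here for discrete features. The approximate-Markov-blanket logic built on top of it in the paper is omitted.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a sequence of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalised to [0, 1]:
    1 means the feature determines the label, 0 means independence."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))       # joint entropy H(X, Y)
    mi = hx + hy - hxy                   # mutual information I(X; Y)
    denom = hx + hy
    return 2 * mi / denom if denom else 0.0
```

Ranking features by SU against the class label, then pruning any feature whose SU with an already-kept feature exceeds its SU with the label, is the usual approximate-Markov-blanket recipe.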
48. EEG sensor driven assistive device for elbow and finger rehabilitation using deep learning.
- Author
-
Mukherjee, Prithwijit and Halder Roy, Anisha
- Subjects
- *ASSISTIVE technology, *ELECTROENCEPHALOGRAPHY, *DEEP learning, *ELBOW, *REHABILITATION, *MOTOR imagery (Cognition), *DETECTORS, *DATA recorders & recording
- Abstract
In today's world, a large number of people suffer from motor impairment-related challenges, and rehabilitation is the main method used to overcome these difficulties. The goal of this paper is to develop a deep learning-based electroencephalogram (EEG) sensor-controlled assistive device for the rehabilitation of elbow and finger movements. We introduce an innovative finger and elbow movement rehabilitation method using an EEG sensor, drawing on the sensor's recorded EEG signals, attention values, and meditation values. The technique helps a person perform basic finger rehabilitation motions, such as finger extension and flexion, as well as basic elbow rehabilitation exercises, i.e., elbow extension and flexion. In this research, an EEG sensor records the prefrontal lobe's EEG signals, attention value, and meditation value while the person performs motor imagery. A deep learning-based CNN-TLSTM (Convolutional Neural Network-tanh Long Short-Term Memory) model with an attention mechanism has been designed to decode the recorded data, and the trained model decides the course of action of the rehabilitation device. The designed model achieves an accuracy of 99.6%. A working prototype of the rehabilitation device has been developed, and its overall success rate is found to be 98.66%. The novelty of the paper lies in i) designing an attention-based CNN-TLSTM model for motor imagery classification and ii) developing a low-cost EEG sensor-driven rehabilitation device for finger and elbow movement rehabilitation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. A novel evaluation method for renewable energy development based on improved sparrow search algorithm and projection pursuit model.
- Author
-
Leng, Ya-Jun, Zhang, Huan, and Li, Xiao-Shuang
- Subjects
- *ENERGY development, *RENEWABLE energy sources, *SEARCH algorithms, *EVALUATION methodology, *CARBON emissions
- Abstract
With global climate change posing a major threat to human society, a growing number of countries have adopted carbon neutrality as a national strategy and proposed a vision of a carbon-free future. As an important supplement to traditional fossil energy, renewable energy is the main force for reducing the use of high-carbon energy and carbon dioxide emissions, and it will become the trend of future social development. Finding the optimal renewable energy source is of particular significance for achieving net-zero emissions. However, existing evaluation methods for renewable energy sources have obvious shortcomings. In terms of weight calculation, subjective methods are highly arbitrary, and the resulting index weights do not reflect small changes in the evaluation matrix, which affects the reliability and accuracy of the evaluation results. Existing ranking methods can only produce a complete ranking of the different objects; they cannot classify renewable energy technical alternatives into different grades. Against this background, this paper proposes a novel evaluation method for renewable energy plans based on an improved sparrow search algorithm and a projection pursuit model. First, this paper improves the traditional sparrow search algorithm in three respects: population initialization, population update, and population variation. Then, the projection pursuit model is constructed, and the improved sparrow search algorithm is applied to optimize the projection objective and find the optimal projection direction, thereby determining the weight of each evaluation index. Finally, the weighted rank-sum ratio method is used to select the best renewable energy technical plan, which not only produces a complete ranking of the different plans but also classifies them into different levels.
Based on the actual renewable energy development data from a province in China, experiments were carried out to investigate the effectiveness of the proposed method. Experimental results show that the proposed method performs better than some existing evaluation methods of renewable energy technical plans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
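The weighted rank-sum ratio step at the end of the pipeline can be sketched directly. Ties are ignored in this simplified ranking, and the weights are illustrative; in the paper they come from the projection direction found by the improved sparrow search algorithm.

```python
def rank(values):
    """1-based ranks, smallest value gets rank 1 (ties not averaged)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    for pos, idx in enumerate(order):
        r[idx] = pos + 1
    return r

def weighted_rsr(matrix, weights):
    """Weighted rank-sum ratio: rank each indicator column across the
    plans, then RSR_i = sum_j w_j * R_ij / n. Higher RSR = better plan,
    and RSR bands can be cut into grades."""
    n, m = len(matrix), len(matrix[0])
    cols = [rank([row[j] for row in matrix]) for j in range(m)]
    return [sum(weights[j] * cols[j][i] for j in range(m)) / n
            for i in range(n)]
```

Grading then amounts to thresholding the RSR values (for example, top third, middle third, bottom third), which gives both the complete ranking and the level classification the abstract mentions.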
50. Elastic net-based high dimensional data selection for regression.
- Author
-
Chamlal, Hasna, Benzmane, Asmaa, and Ouaderhman, Tayeb
- Subjects
- *FEATURE selection, *RESEARCH personnel, *VITAMIN B2, *PREDICTION models
- Abstract
High-dimensional feature selection is of particular interest to researchers. In some domains, such as microarray data, it is quite common for a group of highly correlated explanatory variables to be of equal importance for inclusion in the predictive model. This paper proposes a new hybrid feature selection approach, K-EN, that integrates feature screening based on Kendall's tau with Elastic Net regularized regression. As an approach that embeds the Elastic Net, K-EN has the advantage of the grouping effect, which automatically includes all the highly correlated variables in a group. The K-EN approach offers insightful solutions to high-dimensional regression problems and improves Elastic Net performance, since a preliminary screening step reduces the number of explanatory variables by removing those that disagree with the target according to Kendall's tau. The use of Kendall's tau further enhances Elastic Net performance, as it is robust enough to handle heavy-tailed distributions, non-parametric models, outliers, and non-normal data with ease. K-EN is therefore a time-saving approach. The proposed algorithm is evaluated on four simulation scenarios and four publicly available datasets (riboflavin, eyedata, Longley, and Boston Housing), achieving Mean Squared Errors (MSE) of 0.2528, 0.0098, 0.1007, and 0.4121, respectively. K-EN's MSEs are the best among the state-of-the-art approaches reviewed in this paper. In addition, K-EN selects up to 100% of relevant features when run on simulated data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
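The screening half of K-EN, dropping the features whose Kendall's tau with the target is weakest before running the Elastic Net, can be sketched as below. This uses tau-a with no tie correction, and the `keep` cutoff is an illustrative knob; the paper's actual thresholding rule may differ.

```python
def sign(v):
    return (v > 0) - (v < 0)

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs, normalised
    by the total number of pairs. O(n^2), fine for a sketch."""
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))

def screen_features(X, y, keep):
    """Rank feature columns by |tau| against the target and return the
    indices of the top `keep`; the Elastic Net then runs on this
    reduced design matrix."""
    taus = [abs(kendall_tau([row[j] for row in X], y))
            for j in range(len(X[0]))]
    return sorted(range(len(taus)), key=lambda j: taus[j], reverse=True)[:keep]
```

Because tau depends only on pair orderings, this screen is insensitive to outliers and monotone transformations of the features, which is the robustness property the abstract credits to Kendall's tau.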
Discovery Service for Jio Institute Digital Library