996,088 results
Search Results
2. Construction of Digital Shared Resources in Vocational Colleges Based on the Computer Network Security Framework System of the Credit Bank
- Author
-
Ma, Shue, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Bhattacharya, Abhishek, editor, Dutta, Soumi, editor, Dutta, Paramartha, editor, and Samanta, Debabrata, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Research on the Important Role of Computers in the Digital Transformation of the Clothing Industry
- Author
-
Wang, Ping, Zhang, Xuming, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Li, Kangshun, editor, and Liu, Yong, editor
- Published
- 2024
- Full Text
- View/download PDF
4. Notable Papers and New Directions in Sensors, Signals, and Imaging Informatics
- Author
-
Hsu, William, Baumgartner, Christian, and Deserno, Thomas M
- Subjects
Networking and Information Technology R&D (NITRD) ,Biometry ,Diagnostic Imaging ,Electroencephalography ,Humans ,Machine Learning ,Medical Informatics ,Neural Networks ,Computer ,Reproducibility of Results ,Section Editors of the IMIA Yearbook Section on Sensors ,Signals ,and Imaging Informatics ,Biochemistry and Cell Biology ,Library and Information Studies ,Public Health and Health Services - Abstract
Objective: To identify and highlight research papers representing noteworthy developments in signals, sensors, and imaging informatics in 2020. Method: A broad literature search was conducted on PubMed and Scopus databases. We combined Medical Subject Heading (MeSH) terms and keywords to construct particular queries for sensors, signals, and image informatics. We only considered papers that have been published in journals providing at least three articles in the query response. Section editors then independently reviewed the titles and abstracts of preselected papers assessed on a three-point Likert scale. Papers were rated from 1 (do not include) to 3 (should be included) for each topical area (sensors, signals, and imaging informatics) and those with an average score of 2 or above were subsequently read and assessed again by two of the three co-editors. Finally, the top 14 papers with the highest combined scores were considered based on consensus. Results: The search for papers was executed in January 2021. After removing duplicates and conference proceedings, the query returned a set of 101, 193, and 529 papers for sensors, signals, and imaging informatics, respectively. We filtered out journals that had less than three papers in the query results, reducing the number of papers to 41, 117, and 333, respectively. From these, the co-editors identified 22 candidate papers with more than 2 Likert points on average, from which 14 candidate best papers were nominated after intensive discussion. At least five external reviewers then rated the remaining papers. The four finalist papers were found using the composite rating of all external reviewers. These best papers were approved by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board. Conclusions: Sensors, signals, and imaging informatics is a dynamic field of intense research. The four best papers represent advanced approaches for combining, processing, modeling, and analyzing heterogeneous sensor and imaging data. The selected papers demonstrate the combination and fusion of multiple sensors and sensor networks using electrocardiogram (ECG), electroencephalogram (EEG), or photoplethysmogram (PPG) with advanced data processing, deep and machine learning techniques, and present image processing modalities beyond state-of-the-art that significantly support and further improve medical decision making.
- Published
- 2021
5. Effectiveness of Deep Learning Based Filtering Algorithm in Separation of Human Objects from Images
- Author
-
Khalilov, S. P., Yusupov, I., Mannapova, M. G., Nasrullayev, N. B., Botirov, F., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Zaynidinov, Hakimjon, editor, Singh, Madhusudan, editor, Tiwary, Uma Shanker, editor, and Singh, Dhananjay, editor
- Published
- 2023
- Full Text
- View/download PDF
6. Notable Papers and Trends from 2019 in Sensors, Signals, and Imaging Informatics
- Author
-
Hsu, William, Baumgartner, Christian, and Deserno, Thomas M
- Subjects
Deep Learning ,Diagnostic Imaging ,Humans ,Medical Informatics ,Neural Networks ,Computer ,Signal Processing ,Computer-Assisted ,Section Editors for the IMIA Yearbook Section on Sensors ,Signals ,and Imaging Informatics ,Biochemistry and Cell Biology ,Library and Information Studies ,Public Health and Health Services - Abstract
Objective: To highlight noteworthy papers that are representative of 2019 developments in the fields of sensors, signals, and imaging informatics. Method: A broad literature search was conducted in January 2020 using PubMed. Separate predefined queries were created for sensors/signals and imaging informatics using a combination of Medical Subject Heading (MeSH) terms and keywords. Section editors reviewed the titles and abstracts of both sets of results. Papers were assessed on a three-point Likert scale by two co-editors, rated from 3 (do not include) to 1 (should be included). Papers with an average score of 2 or less were then read by all three section editors, and the group nominated top papers based on consensus. These candidate best papers were then rated by at least six external reviewers. Results: The query related to signals and sensors returned a set of 255 papers from 140 unique journals. The imaging informatics query returned a set of 3,262 papers from 870 unique journals. Based on titles and abstracts, the section co-editors jointly filtered the list down to 50 papers from which 15 candidate best papers were nominated after discussion. A composite rating after review determined four papers which were then approved by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board. These best papers represent different international groups and journals. Conclusions: The four best papers represent state-of-the-art approaches for processing, combining, and analyzing heterogeneous sensor and imaging data. These papers demonstrate the use of advanced machine learning techniques to improve comparisons between images acquired at different time points, fuse information from multiple sensors, and translate images from one modality to another.
- Published
- 2020
7. Management of Incidental Thyroid Nodules on Chest CT: Using Natural Language Processing to Assess White Paper Adherence and Track Patient Outcomes
- Author
-
Benjamin Wildman-Tobriner, Steven Dondlinger, and Ryan G. Short
- Subjects
Thyroid nodules ,Chest ct ,Thyroid ultrasound ,computer.software_genre ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,White paper ,Chart review ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Thyroid Nodule ,Natural Language Processing ,Retrospective Studies ,Incidental Findings ,business.industry ,Ultrasound ,Nodule (medicine) ,medicine.disease ,030220 oncology & carcinogenesis ,Artificial intelligence ,medicine.symptom ,Tomography, X-Ray Computed ,business ,computer ,Natural language processing - Abstract
Objective: The purpose of this study was to develop a natural language processing (NLP) pipeline to identify incidental thyroid nodules (ITNs) meeting criteria for sonographic follow-up and to assess both adherence rates to white paper recommendations and downstream outcomes related to these incidental findings. Methods: 21,583 non-contrast chest CT reports from 2017 and 2018 were retrospectively evaluated to identify reports which included either an explicit recommendation for thyroid ultrasound, a description of a nodule ≥ 1.5 cm, or a description of a nodule with suspicious features. Reports from 2018 were used to train an NLP algorithm called fastText for automated identification of such reports. Algorithm performance was then evaluated on the 2017 reports. Next, any patient from 2017 with a report meeting criteria for ultrasound follow-up was further evaluated with manual chart review to determine follow-up adherence rates and nodule-related outcomes. Results: NLP identified reports with ITNs meeting criteria for sonographic follow-up with an accuracy of 96.5% (95% CI 96.2-96.7) and sensitivity of 92.1% (95% CI 89.8-94.3). In 10,006 chest CTs from 2017, ITN follow-up ultrasound was indicated according to white paper criteria in 81 patients (0.8%), explicitly recommended in 46.9% (38/81) of patients, and obtained in less than half of patients in which it was appropriately recommended (17/35, 48.6%). Discussion: NLP accurately identified chest CT reports meeting criteria for ITN ultrasound follow-up. Radiologist adherence to white paper guidelines and subsequent referrer adherence to radiologist recommendations showed room for improvement.
- Published
- 2022
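Entry 7 above trains a fastText classifier on 2018 report text and evaluates it on 2017 reports. The sketch below is a hypothetical, self-contained illustration of that kind of supervised fastText text classification; the label names, toy report sentences, and hyperparameters are fabricated and are not the authors' pipeline.

```python
# Hypothetical sketch: train a fastText text classifier to flag chest CT report
# impressions whose incidental thyroid nodule (ITN) findings meet ultrasound
# follow-up criteria. Labels, example sentences, and hyperparameters are toys.
import fasttext

# fastText expects one example per line, prefixed with "__label__...".
train_lines = [
    "__label__followup incidental 1.8 cm right thyroid nodule ultrasound recommended",
    "__label__followup thyroid nodule with suspicious calcifications noted",
    "__label__no_followup lungs clear no thyroid abnormality",
    "__label__no_followup stable 8 mm thyroid nodule no further imaging needed",
] * 50  # repeat the toy examples so training has something to learn from

with open("reports_toy.train", "w") as f:
    f.write("\n".join(train_lines))

model = fasttext.train_supervised(
    input="reports_toy.train",
    epoch=10, lr=0.5, wordNgrams=2,  # illustrative hyperparameters
)

# Classify a single fabricated report impression.
label, prob = model.predict("incidental 2 cm thyroid nodule seen, ultrasound advised")
print(label, prob)
```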
8. The legal content of a white paper for an ICO (initial coins offering)
- Author
-
Sergey Kasatkin
- Subjects
White (horse) ,Smart contract ,Communication ,media_common.quotation_subject ,Advertising ,computer.file_format ,Certainty ,Security token ,Computer Science Applications ,White paper ,ICO ,Business ,Content (Freudian dream analysis) ,Law ,computer ,media_common - Abstract
Clear legal content of white papers provides certainty, stability and trust in the relations between all the participants in initial coins offering (ICO) procedures and contributes to the successful...
- Published
- 2021
9. Design and Construction of Zana Robot for Modeling Human Player in Rock-paper-scissors Game using Multilayer Perceptron, Radial basis Functions and Markov Algorithms
- Author
-
Peshawa Jammal Muhammad Ali, Abdolreza Roshani, Maryam Ghasemi, Ehsan Nazemi, Farhad F. Nia, and Gholam Hossein Roshani
- Subjects
Paper ,Technology ,Computer science ,Science ,Markov model ,upgraded Markov model ,Radial basis functions ,Software ,Multilayer perceptron ,Scissors game ,MATLAB ,General Environmental Science ,computer.programming_language ,Graphical user interface ,Computer. Automation ,Artificial neural network ,Markov chain ,business.industry ,Agriculture ,Rock ,General Earth and Planetary Sciences ,Robot ,business ,Engineering sciences. Technology ,computer ,Algorithm - Abstract
In this paper, artificial neural networks (a multilayer perceptron [MLP] and radial basis functions [RBF]) and an upgraded Markov chain model were implemented and studied to identify human behavior patterns during the rock, paper, and scissors game. The main motivation of this research is the design and construction of an intelligent robot able to defeat a human opponent. MATLAB software was used to implement the intelligent algorithms. After implementing the algorithms, their effectiveness in detecting human behavior patterns was investigated. To ensure the ideal performance of the implemented models, each player played against each of the algorithms in three different stages. The results showed that the computer's winning percentage with the MLP neural network, the RBF neural network, and the upgraded Markov model, averaged across men and women, was 59%, 76.66%, and 75%, respectively. These results clearly indicate very good performance of the RBF neural network and the upgraded Markov model in modeling the mind of the human opponent in the game of rock, paper, and scissors. Finally, the designed game was deployed in both hardware and software, comprising the Zana intelligent robot and a digital version with a graphical user interface on a stand. To the best of the authors' knowledge, the precision of the presented method for determining human behavior patterns is the highest among previous studies.
- Published
- 2021
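Entry 9 above predicts a human opponent's next rock-paper-scissors move from play history using, among other models, a Markov chain. As a rough illustration of that idea only (not the authors' upgraded Markov model or their MATLAB implementation), a first-order transition-count predictor could look like this sketch.

```python
# Hypothetical first-order Markov predictor for rock-paper-scissors.
# Counts opponent transitions (previous move -> next move), predicts the most
# likely next move, and plays the move that beats it.
from collections import defaultdict
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

transition = defaultdict(lambda: defaultdict(int))  # transition[prev][next] = count

def predict_next(prev_move):
    counts = transition[prev_move]
    if not counts:
        return random.choice(MOVES)          # no history yet: guess uniformly
    return max(counts, key=counts.get)       # most frequent follow-up so far

def robot_move(prev_move):
    return BEATS[predict_next(prev_move)]    # counter the predicted human move

def observe(prev_move, new_move):
    transition[prev_move][new_move] += 1     # update the transition counts

# Toy usage with a fabricated sequence of human moves.
history = ["rock", "rock", "paper", "rock", "paper", "scissors", "rock", "paper"]
wins = 0
for prev, nxt in zip(history, history[1:]):
    if robot_move(prev) == BEATS[nxt]:       # robot wins if it played the counter
        wins += 1
    observe(prev, nxt)
print(f"robot wins {wins} of {len(history) - 1} rounds on this toy sequence")
```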
10. Towards development of a system for automatic assessment of the quality of a question paper
- Author
-
Sujan Kumar Saha
- Subjects
Question assessment ,Higher education ,Computer science ,media_common.quotation_subject ,Question difficulty ,Question paper quality ,02 engineering and technology ,computer.software_genre ,Education ,Domain (software engineering) ,Development (topology) ,Educational assessment ,0202 electrical engineering, electronic engineering, information engineering ,Question relevance ,Quality (business) ,Relevance (information retrieval) ,Set (psychology) ,media_common ,lcsh:LC8-6691 ,lcsh:Special aspects of education ,business.industry ,05 social sciences ,050301 education ,Computer Science Applications ,Risk analysis (engineering) ,Quality Score ,020201 artificial intelligence & image processing ,business ,0503 education ,computer - Abstract
In this paper, we present a system for automatic evaluation of the quality of a question paper. The question paper plays a major role in educational assessment, and its quality is crucial to fulfilling the purpose of the assessment. In many education sectors, question papers are prepared manually. A prior analysis of a question paper might help in finding errors in the paper and in better achieving the goals of the assessment. In this experiment, we focus on higher education in the technical domain. First, we conducted a student survey to identify the key factors that affect the quality of a question paper; the top factors identified were question relevance, question difficulty, and time requirement. We explored and implemented strategies to handle these factors, employing various concepts and techniques. The system finally assigns a numerical quality score against these factors. The system was evaluated using a set of question papers collected from various sources, and the experimental results show that the proposed system is quite promising.
- Published
- 2021
11. Potential for energy conservation: A portable desktop paper reusing system for office waste paper
- Author
-
Zutao Zhang, Yajia Pan, Yanping Yuan, Tingsheng Zhang, and Liu Xinglong
- Subjects
Computer science ,Transportation ,Reuse ,computer.software_genre ,Grayscale ,lcsh:TD1-1066 ,lcsh:TH1-9745 ,Compensation (engineering) ,Save energy and resources ,Font ,lcsh:Environmental technology. Sanitary engineering ,Civil and Structural Engineering ,Database ,Renewable Energy, Sustainability and the Environment ,business.industry ,Paper reusing system ,Greyscale sensor ,Schematic ,Building and Construction ,Font area detection ,Environmentally friendly ,Renewable energy ,Energy conservation ,business ,computer ,lcsh:Building construction - Abstract
Renewable paper reusing plays a significant role in a sustainable environment against the background of the shortage of forest resources and the pollution from the paper industry. The conventional reusing stream of waste office paper has low reuse rates while consuming massive amounts of energy in intermediate steps. In this study, we developed a novel portable renewable desktop paper reusing system based on font area detection and a greyscale sensor. The proposed system consists of two main parts, namely a greyscale sensor and font area detection model, and a polishing mechanism. Acting as an ink mark detector for waste desktop paper, the greyscale sensor and font area detection model detect the font on the waste desktop paper using an adaptive dynamic compensation schematic. The polishing mechanism grinds the font area of the waste desktop paper, and this paper reusing process is non-chemical, energy saving and environmentally friendly. The proposed system is demonstrated through simulations and experimental results, which show that the proposed renewable desktop paper reusing system is portable and effective for reusing waste office paper in the office. An accuracy of 99.78% is demonstrated for the greyscale sensor and font area detection model, and the average reuse rate of one piece of paper is 2.52 times, verifying that the proposed portable system is effective and practical in renewable desktop paper reusing applications.
- Published
- 2020
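Entry 11 above detects printed (font) areas on waste paper with a greyscale sensor before polishing them off. The toy sketch below is a purely software analogue under assumed inputs: it thresholds a fabricated greyscale scan to find ink pixels, and does not reproduce the authors' adaptive dynamic compensation scheme.

```python
# Hypothetical font-area detection on a scanned sheet: pixels darker than a
# greyscale threshold are treated as ink; their share tells the polishing
# mechanism how much of the sheet to grind. The synthetic "scan" is fabricated.
import numpy as np

def detect_font_area(grey_image, threshold=128):
    ink_mask = grey_image < threshold    # True where the paper carries print
    return ink_mask, ink_mask.mean()     # mask and fraction of sheet to polish

# Fabricated 200x200 scan: white paper (value 255) with one dark printed band.
page = np.full((200, 200), 255, dtype=np.uint8)
page[80:120, 20:180] = 40                # the "text" block

mask, fraction = detect_font_area(page)
print(f"printed area: {fraction:.1%} of the sheet")
```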
12. Modeling-Guided Design of Paper Microfluidic Networks: A Case Study of Sequential Fluid Delivery
- Author
-
Dharitri Rath and Bhushan J. Toley
- Subjects
Paper ,Optimal design ,Reverse engineering ,Computer science ,Microfluidics ,Bioengineering ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Lab-On-A-Chip Devices ,Instrumentation ,Immunoassay ,Fluid Flow and Transfer Processes ,Mathematical model ,Process Chemistry and Technology ,010401 analytical chemistry ,Control engineering ,Microfluidic Analytical Techniques ,021001 nanoscience & nanotechnology ,Trial and error ,0104 chemical sciences ,Flow (mathematics) ,Richards equation ,0210 nano-technology ,Convection–diffusion equation ,computer - Abstract
Paper-based microfluidic devices are popular for their ability to automate multistep assays for chemical or biological sensing at a low cost, but the design of paper microfluidic networks has largely relied on experimental trial and error. A few mathematical models of flow through paper microfluidic devices have been developed and have succeeded in explaining experimental flow behavior. However, the reverse engineering problem of designing complex paper networks guided by appropriate mathematical models is largely unsolved. In this article, we demonstrate that a two-dimensional paper network (2DPN) designed to sequentially deliver three fluids to a test zone on the device can be computationally designed and experimentally implemented without experimental trial and error. This was accomplished by three new developments in modeling flow through paper networks: (i) coupling of the Richards equation of flow through porous media to the species transport equation, (ii) modeling flow through assemblies of multiple paper materials (test membrane and wicking pad), and (iii) incorporating limited-volume fluid sources. We demonstrate the application of this model in the optimal design of a paper-based signal-enhanced immunoassay for a malaria protein, PfHRP2. This work lays the foundation for the development of a computational design toolbox to aid in the design of paper microfluidic networks.
- Published
- 2020
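For readers unfamiliar with the governing equations named in entry 12 above, the coupling the authors describe can be written, in one commonly used form (the paper's exact notation, retention functions, and boundary conditions may differ), as the Richards equation for unsaturated flow in the paper together with an advection-diffusion equation for the transported species:

```latex
% Richards equation for the moisture content \theta in the porous paper strip
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \bigl[ K(\theta)\, \nabla \psi(\theta) \bigr]

% Species transport: concentration c advected by the Darcy flux q and diffusing with D
\frac{\partial (\theta c)}{\partial t}
  = \nabla \cdot \bigl( \theta D\, \nabla c \bigr) \;-\; \nabla \cdot \bigl( \mathbf{q}\, c \bigr),
\qquad \mathbf{q} = -K(\theta)\, \nabla \psi(\theta)
```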
13. Smartphone-based application vs paper-based record: female adolescents acceptance on fluid record tool
- Author
-
Izka Sofiyya Wahyurin, Izzati Nur Khoiriani, Hiya Alfi Rahmah, and Pramesthi Widya Hapsari
- Subjects
Nutrition and Dietetics ,Multimedia ,Computer science ,Paper based ,computer.software_genre ,computer ,Food Science - Abstract
Introduction: Water is essential for normal functioning of the human body. Total fluid intake assessment using the fluid record method is considered a burden for respondents, and the development of technology is expected to contribute favourably to this issue. My Fluid Diary is a smartphone-based application developed by the researcher as a fluid intake recording tool. This study aimed to evaluate the acceptance of manual, paper-based fluid intake recording compared to using My Fluid Diary in a trial among Indonesian vocational female students. Methods: A qualitative study was conducted to explore students' acceptance of fluid intake recording using the smartphone-based application. An exploratory case study approach involving 38 female students as key informants was used, with focus group discussion and in-depth interviews as methods of triangulation. Results: Based on the data, female adolescents reported that the application was more acceptable for fluid intake recording than recording manually in a book, in consideration of three aspects: the benefits, the ease of use, and the application display or features. In terms of benefits, My Fluid Diary was described as easy to learn and use. However, further development of the application is still needed. Conclusion: My Fluid Diary was an application with respectable acceptance for fluid recording compared to the manual, paper-based method among female adolescents.
- Published
- 2020
14. Human-Computer Cloud for Smart Cities: Tourist Itinerary Planning Case Study
- Author
-
Smirnov, Alexander, Ponomarev, Andrew, Teslya, Nikolay, Shilov, Nikolay, van der Aalst, Wil M.P., Series editor, Mylopoulos, John, Series editor, Rosemann, Michael, Series editor, Shaw, Michael J., Series editor, Szyperski, Clemens, Series editor, and Abramowicz, Witold, editor
- Published
- 2017
- Full Text
- View/download PDF
15. Nanocellulose reinforcement in paper produced from fiber blending
- Author
-
Matheus Felipe Freire Pego, Maria Lucia Bianchi, and Patrícia Kaji Yasumura
- Subjects
040101 forestry ,0106 biological sciences ,Grammage ,Materials science ,Opacity ,Pulp (paper) ,Forestry ,04 agricultural and veterinary sciences ,Plant Science ,Permeance ,engineering.material ,01 natural sciences ,Industrial and Manufacturing Engineering ,Nanocellulose ,Coating ,010608 biotechnology ,Ultimate tensile strength ,engineering ,0401 agriculture, forestry, and fisheries ,General Materials Science ,Composite material ,computer ,SISAL ,computer.programming_language - Abstract
This study aimed to evaluate the effect of nanocellulose addition on the physical-mechanical properties of paper produced from different fiber blends, and to compare two nanocellulose addition methods. Three different fibers were used for fiber blending (eucalyptus, sisal, and pine). Handsheets were formed by mixing all possible fiber combinations at a 45/55 ratio, at 2% consistency and 60 g/m². Handsheet reinforcement was performed by two methods: the mixture method (MT), in which nanocellulose was mixed with the pulp during paper formation at 3, 5, and 10% addition; and the coating method (CT), in which dry formed papers were surface-coated at 10% addition. Nanocellulose was produced by mechanical microfibrillation of sisal pulp. Handsheets were evaluated by physical and strength properties. Nanocellulose addition increased thickness, volume, grammage, apparent density, opacity, roughness, tensile strength, tensile index, stretch, bursting index, tear index, and fold endurance by 8.7, 8.8, 10.4, 2.1, 4.1, 23.2, 45.7, 31.8, 20.1, 14.2, 21.1, and 271.6%, respectively, but reduced bulk, brightness, and air permeance by 1.9, 3.4, and 71.7%, respectively. The two reinforcement methods gave distinct results. For physical properties, an increasing tendency with increasing nanocellulose content (MT) was observed in thickness, grammage, and apparent density, along with a decreasing trend in air permeance; no tendency was observed in the other physical properties. In general, CT presented higher values of thickness, grammage, bulk, and brightness but lower values of apparent density and opacity compared to MT. The mixture method showed an increasing tendency in strength properties with increasing nanocellulose content, whereas CT gave lower strength properties than MT.
- Published
- 2020
16. A Tool for Comparing Mathematics Tasks from Paper-Based and Digital Environments
- Author
-
Alice Lemmo
- Subjects
General Mathematics ,Comparability ,Paper based ,computer.software_genre ,Science education ,Mathematics education ,Computer-based assessment ,Education ,Task design ,Comparative study, Computer-based assessment, Mathematics education, Task analysis, Task design ,Human–computer interaction ,Educational assessment ,Task analysis ,Statistical analysis ,Comparative study ,computer ,Pencil (mathematics) - Abstract
Comparative studies on paper and pencil– and computer-based tests principally focus on statistical analysis of students’ performances. In educational assessment, comparing students’ performance (in terms of right or wrong results) does not imply a comparison of problem-solving processes followed by students. In this paper, we present a theoretical tool for task analysis that allows us to highlight how students’ problem-solving processes could change in switching from paper to computer format and how these changes could be affected by the use of one environment rather than another. In particular, the aim of our study lies in identifying a set of indexes to highlight possible consequences that specific changes in task formulation have, in terms of task comparability. Therefore, we propose an example of the use of the tool for comparing paper-based and computer-based tasks.
- Published
- 2020
17. Legibility of prints on paper made from Japanese knotweed
- Author
-
Barbara Blaznik, Klemen Možina, Dorotea Kovačević, Sabina Bračko, and Klementina Možina
- Subjects
0106 biological sciences ,Environmental Engineering ,Computer science ,business.industry ,Colorimetric properties ,Inkjet printing ,Invasive alien plant species ,Japanese knotweed paper ,Legibility ,Typography ,Bioengineering ,Substrate (printing) ,computer.software_genre ,01 natural sciences ,Print permanence ,010608 biotechnology ,Typeface ,Plant species ,Artificial intelligence ,business ,Waste Management and Disposal ,computer ,Stroke width ,Natural language processing - Abstract
The spread of invasive alien plant species (IAPS) is a leading cause of worldwide environmental change due to their effects on biodiversity and humans. Some valued goods have been produced from IAPS, e.g. paper made of cellulose fibres from Japanese knotweed. Therefore, the aim of this study was to establish the usability of this paper grade as a printing substrate, since it does not have the ideal optical properties expected of commercial office paper. Because it is widely used, inkjet printing technology was employed. Print permanence is essential, especially when printing documents. However, typographic characteristics must also be considered to make a text more legible. Two widely used typefaces (Arial and Times) were tested in three commonly used type sizes (8 pt, 10 pt, and 12 pt). The results showed that the paper made from Japanese knotweed can have valuable properties and suitable legibility, especially when using typefaces with a moderate counter size, high x-height, and minimal differences in letter stroke width, to obtain an appropriate typographic tonal density with an adequate type size. Even after exposure to light, texts printed in a proper type size and stroke width remained visible.
- Published
- 2020
18. Predicting Breast Cancer by Paper Spray Ion Mobility Spectrometry Mass Spectrometry and Machine Learning
- Author
-
Ewelina P. Dutkiewicz, Chih-Lin Chen, Hua-Yi Hsieh, Cheng-Chih Hsu, Ying-Chen Huang, Ming-Yang Wang, Hsin-Hsiang Chung, and Bo-Rong Chen
- Subjects
Paper ,Core needle ,Spectrometry, Mass, Electrospray Ionization ,Ion-mobility spectrometry ,Electrospray ionization ,Breast Neoplasms ,010402 general chemistry ,Machine learning ,computer.software_genre ,Mass spectrometry ,01 natural sciences ,Analytical Chemistry ,Machine Learning ,Breast cancer ,Ion Mobility Spectrometry ,medicine ,Humans ,business.industry ,Chemistry ,010401 analytical chemistry ,medicine.disease ,Mass spectrometric ,0104 chemical sciences ,Ion-mobility spectrometry–mass spectrometry ,Female ,Artificial intelligence ,Asymmetric waveform ,business ,computer ,Algorithms - Abstract
Paper spray ionization has been used as a fast sampling/ionization method for the direct mass spectrometric analysis of biological samples at ambient conditions. Here, we demonstrated that by utilizing paper spray ionization-mass spectrometry (PSI-MS) coupled with field asymmetric waveform ion mobility spectrometry (FAIMS), predictive metabolic and lipidomic profiles of routine breast core needle biopsies could be obtained effectively. By the combination of machine learning algorithms and pathological examination reports, we developed a classification model, which has an overall accuracy of 87.5% for an instantaneous differentiation between cancerous and noncancerous breast tissues utilizing metabolic and lipidomic profiles. Our results suggested that paper spray ionization-ion mobility spectrometry-mass spectrometry (PSI-IMS-MS) is a powerful approach for rapid breast cancer diagnosis based on altered metabolic and lipidomic profiles.
- Published
- 2019
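Entry 18 above feeds mass-spectrometric metabolic and lipidomic profiles into a machine-learning classifier. The sketch below shows only a generic version of that classification step; the random-forest model, scikit-learn API, and fabricated feature matrix are assumptions, not the authors' actual algorithm or data.

```python
# Hypothetical classification of tissue spectra: rows = biopsies, columns =
# metabolite/lipid feature intensities, labels = cancerous vs noncancerous.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((80, 200))          # fabricated feature matrix (80 biopsies, 200 features)
y = rng.integers(0, 2, size=80)    # fabricated pathology labels (0 = benign, 1 = cancer)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```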
19. Engagement in PC-based, smartphone-based, and paper-based materials: Learning vocabulary through Chinese Stories
- Author
-
Yijen Wang
- Subjects
Embryology ,Vocabulary ,Multimedia ,Computer science ,media_common.quotation_subject ,Cell Biology ,Paper based ,Anatomy ,computer.software_genre ,computer ,Developmental Biology ,media_common - Published
- 2020
20. Comparison of three filter paper-based devices for safety and stability of viral sample collection in poultry
- Author
-
Suwarak Wannaratana, Aunyaratana Thontiravong, and Somsak Pakpinyo
- Subjects
General Immunology and Microbiology ,Food Animals ,Filter paper ,DNA stability ,viruses ,Stability (learning theory) ,Animal Science and Zoology ,Sample collection ,Data mining ,Biology ,computer.software_genre ,computer - Abstract
General diagnosis of poultry viruses primarily relies on detection of viruses in samples, but many farms are located in remote areas requiring logistic transportation. Filter paper cards are a useful...
- Published
- 2020
21. Comparing LSTM and GRU Models to Predict the Condition of a Pulp Paper Press
- Author
-
Antonio J. Marques Cardoso, Rui Assis, Balduíno César Mateus, Mateus Mendes, and José Torres Farinha
- Subjects
Technology ,Multivariate statistics ,Control and Optimization ,Computer science ,GRU ,Energy Engineering and Power Technology ,Machine learning ,computer.software_genre ,Predictive maintenance ,predictive maintenance ,LSTM ,recurrent neural network ,paper press ,Autoregressive integrated moving average ,Electrical and Electronic Engineering ,Engineering (miscellaneous) ,Hyperparameter ,Artificial neural network ,Renewable Energy, Sustainability and the Environment ,business.industry ,Univariate ,Statistical model ,Recurrent neural network ,Artificial intelligence ,business ,computer ,Energy (miscellaneous) - Abstract
The accuracy of a predictive system is critical for predictive maintenance and for supporting the right decisions at the right times. Statistical models, such as ARIMA and SARIMA, are unable to describe the stochastic nature of the data. Neural networks, such as long short-term memory (LSTM) and the gated recurrent unit (GRU), are good predictors for univariate and multivariate data. The present paper describes a case study on pulp paper presses in which the performance of long short-term memory and gated recurrent unit networks is compared across different hyperparameters. In general, the gated recurrent units exhibit better performance, and the final result demonstrates that, to maximize equipment availability, gated recurrent units are the best option among the models tested.
- Published
- 2021
- Full Text
- View/download PDF
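Entry 21 above compares LSTM and GRU recurrent networks for predicting the condition of a pulp paper press. The following is a minimal sketch of how such a comparison could be set up in Keras on a univariate series; the window length, layer sizes, epochs, and synthetic signal are assumptions rather than the authors' configuration.

```python
# Hypothetical LSTM-vs-GRU comparison on a univariate sensor series.
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window=30):
    # Sliding windows of `window` past values predicting the next value.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y            # shape (samples, window, 1)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 40, 2000)) + 0.1 * rng.standard_normal(2000)  # fabricated signal
X, y = make_windows(series)

def build(cell):
    model = models.Sequential([layers.Input(shape=X.shape[1:]), cell(32), layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

for name, cell in [("LSTM", layers.LSTM), ("GRU", layers.GRU)]:
    model = build(cell)
    hist = model.fit(X, y, validation_split=0.2, epochs=5, batch_size=64, verbose=0)
    print(name, "validation MSE:", round(hist.history["val_loss"][-1], 4))
```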
22. Safe-Error Analysis of Post-Quantum Cryptography Mechanisms - Short Paper
- Author
-
Guénaël Renault, Luk Bettale, and Simon Montoya
- Subjects
Post-quantum cryptography ,Exploit ,Computer science ,business.industry ,Short paper ,Process (computing) ,Cryptography ,Fault (power engineering) ,Computer security ,computer.software_genre ,Error analysis ,NIST ,business ,computer - Abstract
The NIST selection process for standardizing Post-Quantum Cryptography mechanisms is currently running. Many papers have already studied their theoretical security, but their resistance in deployed devices has not been much investigated so far. In particular, fault attacks are a serious threat for algorithms implemented in embedded devices. One particularly powerful technique is the safe-error attack. Such attacks exploit the fact that a specific fault may or may not lead to a faulty output depending on a secret value. In this paper, we investigate the resistance of various Post-Quantum candidate algorithms against such attacks.
- Published
- 2021
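Entry 22 above concerns safe-error fault attacks, where a fault that may or may not change the output leaks a secret bit. The toy simulation below illustrates the classic textbook case, a dummy multiplication in square-and-multiply-always exponentiation; it is illustrative only, since the paper targets post-quantum candidate algorithms rather than this construction.

```python
# Toy safe-error attack: corrupt the multiplication at step i of a
# square-and-multiply-always exponentiation. If the final result is still
# correct, the multiply was a dummy, so key bit i was 0; otherwise it was 1.
def expmod_always(base, exp_bits, mod, fault_at=None):
    r, dummy = 1, 1
    for i, bit in enumerate(exp_bits):            # MSB-first key bits
        r = (r * r) % mod                         # square
        prod = (r * base) % mod                   # multiply (always performed)
        if fault_at == i:
            prod = 0                              # inject a stuck-at-zero fault
        if bit:
            r = prod                              # real multiply: fault propagates
        else:
            dummy = prod                          # dummy multiply: fault is absorbed
    return r

secret = [1, 0, 1, 1, 0, 1, 0, 0]                 # fabricated key bits
base, mod = 7, 1009
reference = expmod_always(base, secret, mod)      # fault-free result

recovered = [0 if expmod_always(base, secret, mod, fault_at=i) == reference else 1
             for i in range(len(secret))]
print("recovered bits:", recovered, "correct:", recovered == secret)
```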
23. Effects of Hemicellulose on Recycling Performance of Paper Based on Sisal Fibers
- Author
-
Yian Chen, Haisong Qi, Shaoliu Qin, Shenming Tao, Xingzhen Qin, Cunzhi Zhang, and Pan Chen
- Subjects
chemistry.chemical_compound ,Materials science ,chemistry ,Hemicellulose ,Paper based ,Pulp and paper industry ,computer ,SISAL ,computer.programming_language - Abstract
The pulp and paper industry is paying growing attention to the recycling and maintenance of waste paper products. Each paper-making cycle leads to a sharp drop in the mechanical properties of cellulosic paper, which is related to the hornification effect. Here, the recycling performance of holocellulose paper was studied and compared with that of cellulosic paper. Holocellulose fibers from sisal were fabricated by a gentle delignification method, and the well-preserved cellulose and hemicellulose components hindered the cocrystallization and aggregation of cellulose fibrils. Holocellulose paper exhibited much more favorable recycling properties than cellulosic paper. After 5 runs of recycling, holocellulose paper still showed an ultimate strength as high as 25 MPa (reduced from 35 MPa), a decrease of 27.1%. However, cellulosic paper experienced a substantial loss in ultimate strength from 35 MPa to 9 MPa, a decrease of about 74%. This can be attributed to the core-shell structure formed by cellulose and hemicellulose, which weakens the hornification effect.
- Published
- 2021
24. Paper based analytical devices for blood grouping: a comprehensive review
- Author
-
Mahdi Aminian, Maliheh Paknejad, and Saeed Ebrahimi Fana
- Subjects
Paper ,Blood transfusion ,Computer science ,medicine.medical_treatment ,Point-of-Care Systems ,Microfluidics ,Biomedical Engineering ,Flow method ,Machine learning ,computer.software_genre ,Blood typing ,Antibodies ,Software portability ,Application areas ,ABO blood group system ,medicine ,Humans ,Molecular Biology ,business.industry ,Paper based ,Blood grouping ,Blood Grouping and Crossmatching ,Biological Assay ,Artificial intelligence ,business ,computer - Abstract
The clinical importance of blood group (BG) antigens is related to their ability to induce immune antibodies that can cause hemolysis. ABO and D (Rh) are still considered to be the key antigens for safe blood transfusion, and secondary antigens are the next priority. Serological typing is the most widely used typing method. In some clinical conditions, rapid and accurate blood grouping is needed rather than conventional, slower techniques. Hence, developing a simple and economical model for rapid blood grouping would facilitate these tests. In recent decades, paper-based microfluidics such as μPADs have gained much interest in wide application areas such as point-of-care diagnostics. In this study, we evaluated μPADs developed for blood grouping and their recent progress. A comprehensive literature search was performed using databases including PubMed, Scopus, Web of Science and Google Scholar. Keywords included blood grouping or typing, paper analytical device, rapid test, etc. After screening the search results, 16 papers from 2010 to 2020 were included; detailed information is summarized in Table 1. Generally, two principles for blood-typing μPADs are introduced: the lateral chromatographic flow method and the vertical flow-through method, both of which detect BG in a visual-based manner. To obtain results with acceptable clarity, many factors and challenges, such as the paper, blood sample, buffer, antibody and RBC interaction, and μPAD stability, need to be considered; these are discussed. In conclusion, the simplicity, stability, low cost, portability and biocompatibility of μPADs for blood grouping confirm their utility, and they have the potential to provide a robust, universal blood-grouping platform.

Table 1. Summary of blood grouping tests using paper-based analytical devices

| Antigens | Type of diagnosis | Validation method | Sample no. | Accuracy | Action time | Paper type | Stability | Sample dilution | Buffer | Ref |
|---|---|---|---|---|---|---|---|---|---|---|
| A, B, Rh | Forward | Volunteers' records | 5 | - | - | Whatman No. 4 | - | 1/2 | PBS* | Khan et al. 2010 |
| A, B, Rh | Forward | Gel assay test and conventional slide test | 100 | 100% | 1 min | Whatman No. 4 and Kleenex paper towel | 7 days at 4 °C | 1/1 | NSS | Al-Tamimi et al. 2012 |
| A, B, Rh | Forward | Gel card assay | 99 | 100% | 20 sec + washing | Kleenex paper towel | - | 1/1 | NSS | Li et al. 2012 |
| A, B, Rh | Forward | - | - | - | - | Kleenex paper towel | - | 45/100 | PSS | Li et al. 2013 |
| A, B, Rh | Forward | Gel card assay | 98 | 100% | 1.5 min | Kleenex paper towel | - | 85/100 | PBS | Guan et al. 2014b |
| C, E, c, e, K, Jka, Jkb, M, N, S, P1, and Lea | Forward | Gel card assay | 266 | 100% | - | Kleenex paper towel | - | 1/1 | NSS | Li et al. 2014b |
| A, B, Rh | Forward and reverse | Conventional slide test | 96 | ≈91% | 10 min | Whatman No. 1 | 21 days at 4 °C | 1/2 | NSS | Noiphung et al. 2015 |
| C, c, E, e, K, k, Fya, Fyb, Jka, Jkb, M, N, S and s, P1, Lea and Leb | Forward | - | 478 | - | - | Kleenex paper towel | - | 1/1 | NSS, PBS | Then et al. 2015 |
| A, B | Forward and reverse | Conventional slide test | 76 | 100% | 5-8 min | Whatman No. 4 | 38 days at 4 °C | 1/4, 1/1 | NSS | Songjaroen and Laiwattanapaisal 2016 |
| D, K | Forward | Volunteers' records | 210 | - | 7.5 min | Kleenex paper towel | - | 1/1 | NSS | Yeow et al. 2016 |
| A, B, c, e, D, C, E, M, N, S, s, P1, Jka, Jkb, Lea, Leb, Fya, and Fyb | Forward and reverse | Gel card assay | 3550 | ≈100% | 30 s | Fiber glass and cotton linter | 180 days at 25 °C | 45/100, 1/1 | PBS | Zhang et al. 2017 |
| A, B | Forward | Conventional slide test | 598 | 100% | 3 min | Whatman No. 113 | 14 days at 4 °C | 1/1 | NSS | Songjaroen et al. 2018 |
| A, B, Rh | Forward | Conventional slide test | - | - | 30 sec + washing | Unrefined sisal paper | - | 1/2 | NSS | Casals-Terré et al. 2019 |
| A, B, Rh | Forward | - | - | - | - | Whatman No. 1 | - | 1/1 | NSS | Ansari et al. 2020 |
| ABO, Rh | Forward and reverse | Conventional slide test | - | 100% | | Unrefined eucalyptus papers | - | 1/2 | NSS, PBS | Casals-Terré et al. 2020 |
| A, B, Rh | Forward | - | - | - | 30 sec + washing | Whatman No. 4 modified with chitosan | ≥100 days at 25 °C | 1/1 | NSS | Parween et al. 2020 |
- Published
- 2021
25. A Coin-Free Oracle-Based Augmented Black Box Framework (Full Paper)
- Author
-
Kyosuke Yamashita, Mehdi Tibouchi, and Masayuki Abe
- Subjects
Black box (phreaking) ,Computer science ,Programming language ,Applied Mathematics ,Signal Processing ,Electrical and Electronic Engineering ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,computer ,Full paper ,Oracle - Published
- 2020
26. Comparisons of Receptive and Expressive Vocabulary Performances between Computer-and Paper-based Test in Children with Language Development Delay
- Author
-
Ji Suk Park, Seong Hee Choi, Chul-Hee Choi, and Kyoung jae Lee
- Subjects
Linguistics and Language ,business.industry ,Communication ,Paper based ,computer.software_genre ,Test (assessment) ,Speech and Hearing ,Language development ,Expressive vocabulary ,Artificial intelligence ,business ,Psychology ,computer ,Natural language processing - Abstract
Background and Objectives: This study investigated whether changing the test medium, from a paper-based format to a computer-based (tablet PC) format, produces differences in children's language performance on the Korean Receptive and Expressive Vocabulary Test in children with delayed language development. Methods: The participants were 27 children with delayed language development aged 3 to 7 years. According to the degree of language development delay and integrated language age, two...
- Published
- 2020
27. Award-Winning Undergraduate Paper: Computer Adoption Decisions: Implications for Research and Extension: The Case of Texas Rice Producers
- Author
-
Jarvis, Anne Marie
- Published
- 1990
- Full Text
- View/download PDF
28. Computer Computing and Simulation—In View of the Leaves’ Categories, Shapes and Mass
- Author
-
Li, Jiahong, Li, Heng, Fu, Qiang, Rannenberg, Kai, Editor-in-chief, Sakarovitch, Jacques, Series editor, Goedicke, Michael, Series editor, Tatnall, Arthur, Series editor, Neuhold, Erich J., Series editor, Pras, Aiko, Series editor, Tröltzsch, Fredi, Series editor, Pries-Heje, Jan, Series editor, Whitehouse, Diane, Series editor, Reis, Ricardo, Series editor, Furnell, Steven, Series editor, Furbach, Ulrich, Series editor, Gulliksen, Jan, Series editor, Rauterberg, Matthias, Series editor, Li, Daoliang, editor, and Chen, Yingyi, editor
- Published
- 2015
- Full Text
- View/download PDF
29. Analyzing the Accuracy of Answer Sheet Data in Paper-based Test Using Decision Tree
- Author
-
Aris Puji Widodo, Edy Suharto, and Suryono Suryono
- Subjects
education ,Computer science ,Decision tree ,Paper based ,data mining ,computer.software_genre ,lcsh:QA75.5-76.95 ,lcsh:HD72-88 ,Test (assessment) ,lcsh:Economic growth, development, planning ,Comprehension ,paper-based test ,Order (business) ,decision tree ,Data mining ,lcsh:Electronic computers. Computer science ,Raw data ,computer ,Single layer ,Test data - Abstract
In education quality assurance, the accuracy of test data is crucial. However, there is still a problem regarding the possibility of incorrect data being filled in by test takers during a paper-based test; this problem does not appear in computer-based tests. In this study, a method was proposed to analyze the accuracy of answer sheet filling in paper-based tests using a data mining technique. A single layer of data comprehension was added within the method instead of using the raw data directly. The results of the study were a web-based program for data pre-processing and decision tree models. In total, 374 instances were analyzed. The accuracy of answer sheet filling reached 95.19%, while the accuracy of classification varied from 99.47% to 100% depending on the evaluation method chosen. This study could motivate administrators to improve testing, since the results favor computer-based over paper-based tests.
- Published
- 2019
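Entry 29 above classifies answer-sheet accuracy with a decision tree after a pre-processing ("data comprehension") step. The sketch below shows only a generic decision-tree step with scikit-learn; the features and labels are fabricated placeholders, not the study's data.

```python
# Hypothetical decision-tree classification of answer-sheet filling accuracy.
# Each instance is a pre-processed answer sheet; the label says whether the
# filled-in fields match the authoritative test records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Fabricated features, e.g. number of blank fields, number of double marks,
# mismatch count against the registration database.
X = rng.integers(0, 5, size=(374, 3))
y = (X.sum(axis=1) < 6).astype(int)      # fabricated "accurate sheet" label

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(tree, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.4f}")
```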
30. 29‐1: Invited Paper: International Standards Development of Electronic Paper Displays
- Author
-
Tatsumi Takahashi
- Subjects
Engineering ,Multimedia ,law ,business.industry ,Electronic paper ,Legibility ,business ,computer.software_genre ,computer ,Readability ,law.invention - Published
- 2019
31. Recommending the Title of a Research Paper Based on Its Abstract Using Deep Learning-Based Text Summarization Approaches
- Author
-
Sheetal Bhati, Pinaki Chakraborty, and Shweta Taneja
- Subjects
Recurrent neural network ,Text mining ,business.industry ,Computer science ,Deep learning ,The Internet ,Artificial intelligence ,Paper based ,business ,Machine learning ,computer.software_genre ,Automatic summarization ,computer - Abstract
Due to the increasing use of the Internet and other online resources, there is tremendous growth in the data of text documents. It is not possible to manage this huge data manually. This has led to the growth of fields like text mining and text summarization. This paper presents a title prediction model for research papers using Recursive Recurrent Neural Network (Recursive RNN) and evaluates its performance by comparing it with sequence-to-sequence models. Research papers published between 2018 and 2020 were obtained from a standard repository, viz. Kaggle, to train the title prediction model. The performance of different versions of Recursive RNN and Seq2Seq was evaluated in terms of training and hold-out loss. The experimental results show that Recursive RNN models perform significantly better than the other models.
- Published
- 2021
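Entry 31 above trains Recursive RNN and Seq2Seq models to generate a paper title from its abstract. As a much lighter stand-in that merely illustrates the same abstract-to-title summarization task (and is explicitly not the authors' models), one can run a small pretrained encoder-decoder through the Hugging Face pipeline API:

```python
# Hypothetical abstract-to-title demo with a small pretrained summarizer.
# t5-small is an assumption; the cited work trains its own RNN models instead.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

abstract = (
    "Due to the increasing use of the Internet, there is tremendous growth in "
    "text documents, which has led to the growth of text mining and text "
    "summarization. We present a title prediction model for research papers."
)

candidate_title = summarizer(abstract, max_length=16, min_length=4, do_sample=False)
print(candidate_title[0]["summary_text"])
```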
32. 'Scratch it out': carbon copy based paper devices for microbial assays and liver disease diagnosis
- Author
-
Anusha Prabhu, Naresh Kumar Mani, Revathi P Shenoy, Amrutha Hasandka, Hardik Ramesh Singhal, M. S. Giri Nandagopal, and Akshata Prabhu
- Subjects
Paper ,Carbon copy ,Fabrication ,Materials science ,General Chemical Engineering ,Liver Diseases ,Microfluidics ,General Engineering ,Nanotechnology ,Stencil ,Analytical Chemistry ,Scratch ,Point-of-Care Testing ,Technical training ,Humans ,Viability assay ,computer ,Hydrophobic and Hydrophilic Interactions ,Leakage (electronics) ,computer.programming_language - Abstract
We present a facile paper-based microfluidic device fabrication technique leveraging off-the-shelf carbon paper for the deposition of hydrophobic barriers using a novel "stencil scratching" method. This exceedingly frugal approach (0.05$) requires practically no technical training to employ. Hydrophobic barriers fabricated using this approach offer a width of 3 mm and a hydrophilic channel width of 849 μm, with an ability to confine major aqueous solvents without leakage. The utility of the device is demonstrated by porting a cell viability assay showing a limit-of-detection (LOD) of 0.6 × 108 CFU mL-1 and bilirubin assay with human serum showing a detection range of 1.76-6.9 mg dL-1 and a limit-of-detection (LOD) of 1.76 mg dL-1. The intuitiveness and economic viability of the fabrication method afford it great potential in the field of point-of-care diagnostics geared towards providing testing infrastructure in resource-scarce regions globally.
- Published
- 2021
33. A co‐training ‐based approach for the hierarchical multi‐label classification of research papers
- Author
-
Khalil Drira, Hatem Bellaaj, Abir Masmoudi, and Mohamed Jmaiel
- Subjects
0209 industrial biotechnology ,Computer science ,02 engineering and technology ,Semi-supervised learning ,Imbalanced data ,[INFO.INFO-SE]Computer Science [cs]/Software Engineering [cs.SE] ,Hierarchical Multi-label classication ,Machine learning ,computer.software_genre ,Theoretical Computer Science ,Set (abstract data type) ,Consistency (database systems) ,[INFO.INFO-NI]Computer Science [cs]/Networking and Internet Architecture [cs.NI] ,020901 industrial engineering & automation ,Cardinality ,Co-training ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Multi-label classification ,Hierarchy (mathematics) ,business.industry ,[INFO.INFO-WB]Computer Science [cs]/Web ,Research papers classication ,Bibliographic coupling ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Control and Systems Engineering ,020201 artificial intelligence & image processing ,[INFO.INFO-ET]Computer Science [cs]/Emerging Technologies [cs.ET] ,Artificial intelligence ,business ,computer - Abstract
This paper focuses on the problem of the hierarchical multi-label classification of research papers, which is the task of assigning a paper the set of relevant labels from a hierarchy using reduced amounts of labelled training data. Specifically, we study leveraging unlabelled data, which are usually plentiful and easy to collect, in addition to the few available labelled ones, in a semi-supervised learning framework to achieve better performance. Thus, in this paper, we propose a semi-supervised approach for the hierarchical multi-label classification of research papers based on the well-known Co-training algorithm, which exploits content and bibliographic coupling information as two distinct views of the papers. In our approach, two hierarchical multi-label classifiers are learnt on different views of the labelled data and iteratively select their most confident unlabelled samples, which are then added to the labelled set. The success of our suggested Co-training-based approach lies in two main components. The first is the use of two suggested selection criteria (i.e., Maximum Agreement and Labels Cardinality Consistency) that enforce selecting confident unlabelled samples. The second is the application of an oversampling method that rebalances the label distribution of the initial labelled set, which reduces the reinforcement of the label imbalance issue during Co-training. The proposed approach is evaluated using a collection of scientific papers extracted from the ACM digital library. The experiments performed show the effectiveness of our approach with regard to several baseline methods.
- Published
- 2021
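Entry 33 above builds a Co-training loop over two views of each paper (content and bibliographic coupling). The sketch below is a heavily simplified, single-label schematic of generic Co-training with confidence-based selection; the hierarchical multi-label machinery, the Maximum Agreement and Labels Cardinality Consistency criteria, and the oversampling step from the paper are all omitted, and the data are fabricated.

```python
# Simplified co-training sketch: two classifiers, one per feature view, each
# adds its most confident unlabelled predictions to the shared labelled pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_view1 = rng.random((500, 20))                 # fabricated "content" features
X_view2 = rng.random((500, 10))                 # fabricated "bibliographic" features
y_true = (X_view1[:, 0] + X_view2[:, 0] > 1).astype(int)

labelled = np.arange(40)                        # small initial labelled set
unlabelled = np.arange(40, 500)
y = y_true.copy()                               # only y[labelled] is treated as known

for _ in range(10):                             # co-training rounds
    picks = []
    for X in (X_view1, X_view2):
        clf = LogisticRegression().fit(X[labelled], y[labelled])
        conf = clf.predict_proba(X[unlabelled]).max(axis=1)
        top = unlabelled[np.argsort(conf)[-10:]]          # most confident samples
        y[top] = clf.predict(X[top])                      # pseudo-label them
        picks.append(top)
    newly = np.unique(np.concatenate(picks))
    labelled = np.union1d(labelled, newly)
    unlabelled = np.setdiff1d(unlabelled, newly)

print(f"labelled pool grew to {len(labelled)} samples")
```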
34. A Comparison of Paper and Computer Administered Strengths and Difficulties Questionnaire
- Author
-
Patalay, Praveetha, Hayes, Daniel, Deighton, Jessica, and Wolpert, Miranda
- Published
- 2016
- Full Text
- View/download PDF
35. Process simulation-based evaluation of design and operational implications of water-laid paper machine conversion to foam technology
- Author
-
Lotta Sorsamäki, Eemeli Hytönen, and Antti Koponen
- Subjects
Environmental Engineering ,business.product_category ,foam forming ,Computer science ,business.industry ,Process (computing) ,Forming processes ,Bioengineering ,process configuration ,computer.software_genre ,process simulation ,Simulation software ,Paper machine ,water balance ,New product development ,Process simulation ,Process engineering ,business ,Waste Management and Disposal ,Dimensioning ,computer ,Electronic circuit ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Foam forming technology has attracted much attention during the past few years in the paper industry. Its advantages compared to conventional water forming are a new product portfolio and increased process efficiency. To support the paper industry in pushing foam forming technology forward, process simulation is needed to provide supporting data for strategic decision-making and as a basis for equipment dimensioning. This study examined the conversion of an existing wallpaper machine from water to foam forming technology using process simulation. To determine the required process configuration and parameter changes in the existing process, both published and unpublished data on the foam forming process were collected. This paper also describes modeling of the foam phase in the selected simulation software. The suitability of existing paper process equipment for foam was analyzed. Simulations revealed that undisturbed operation with foam requires some equipment modifications and re-arrangements in water circuits. With foam forming, the water balance in both short and long circulation changes remarkably compared to conventional water forming, leading to a large increase in the long circulation volume flows.
- Published
- 2021
36. Validation of Portuguese-translated computer touch-screen questionnaires in patients with rheumatoid arthritis and spondyloarthritis, compared with paper formats
- Author
-
Cunha-Miranda, Luís, Santos, Helena, Miguel, Cláudia, Silva, Cândida, Barcelos, Filipe, Borges, Joana, Trinca, Ricardo, Vicente, Vera, and Silva, Tiago
- Published
- 2015
- Full Text
- View/download PDF
37. Comparison of Computerised and Pencil-and-Paper Neuropsychological Assessments in Older Culturally and Linguistically Diverse Australians.
- Author
-
Page, Zara A., Croot, Karen, Sachdev, Perminder S., Crawford, John D., Lam, Ben C.P., Brodaty, Henry, Miller Amberber, Amanda, Numbers, Katya, and Kochan, Nicole A.
- Subjects
NEUROPSYCHOLOGICAL tests, NATIVE language, AUSTRALIANS, OLDER people, COGNITIVE ability, COGNITION disorders - Abstract
Objectives: Computerised neuropsychological assessments (CNAs) are proposed as an alternative method of assessing cognition to traditional pencil-and-paper assessment (PnPA), which are considered the "gold standard" for diagnosing dementia. However, limited research has been conducted with culturally and linguistically diverse (CALD) individuals. This study investigated the suitability of PnPAs and CNAs for measuring cognitive performance in a heterogenous sample of older, Australian CALD English-speakers compared to a native English-speaking background (ESB) sample. Methods: Participants were 1037 community-dwelling individuals aged 70–90 years without a dementia diagnosis from the Sydney Memory and Ageing Study (873 ESB, 164 CALD). Differences in the level and pattern of cognitive performance in the CALD group were compared to the ESB group on a newly developed CNA and a comprehensive PnPA in English, controlling for covariates. Multiple hierarchical regression was used to identify the extent to which linguistic and acculturation variables explained performance variance. Results: CALD participants' performance was consistently poorer than ESB participants on both PnPA and CNA, and more so on PnPA than CNA, controlling for socio-demographic and health factors. Linguistic and acculturation variables together explained approximately 20% and 25% of CALD performance on PnPA and CNA respectively, above demographics and self-reported computer use. Conclusions: Performances of CALD and ESB groups differed more on PnPAs than CNAs, but caution is needed in concluding that CNAs are more culturally-appropriate for assessing cognitive decline in older CALD individuals. Our findings extend current literature by confirming the influence of linguistic and acculturation variables on cognitive assessment outcomes for older CALD Australians. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Materializing interactions with paper prototyping: A case study of designing social, collaborative systems with older adults
- Author
-
Chien Wen Yuan, Benjamin V. Hanrahan, Mary Beth Rosson, Jordan Beck, and John M. Carroll
- Subjects
Computer science ,Retirement community ,General Engineering ,General Social Sciences ,computer.software_genre ,Computer Science Applications ,Arts and Humanities (miscellaneous) ,Artificial Intelligence ,Software deployment ,Human–computer interaction ,Bounded function ,Architecture ,Situated ,Community practice ,Collaboration ,computer ,User-centered design ,Paper prototyping - Abstract
Paper prototypes have been successful for gathering concrete, grounded user feedback. Most of these prototypes are used to elicit feedback during bounded design sessions. However, for systems that support new social practices, these bounded sessions do not fully materialize the interactions the prototype supports. This paper presents a case study of a six-month paper prototype deployment that materialized collaborative interactions in a care retirement community. Throughout this deployment the prototype was situated within the lives of participants, who shaped the interactions and usage of the prototype over time. As these interactions materialized, the community practice that evolved was unlike what participants envisioned during our first, more traditional paper prototyping and design sessions. This article shows how our interactive paper prototype provided critical guidance on how realistic our imagined practice would be.
- Published
- 2019
39. Research paper classification systems based on TF-IDF and LDA schemes
- Author
-
Sang-Woon Kim and Joon-Min Gil
- Subjects
Scheme (programming language) ,General Computer Science ,Computer science ,LDA ,K-means clustering ,02 engineering and technology ,Latent Dirichlet allocation ,lcsh:QA75.5-76.95 ,symbols.namesake ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Information theory ,Cluster analysis ,tf–idf ,computer.programming_language ,Information retrieval ,Paper classification ,business.industry ,TF-IDF ,Information technology ,020206 networking & telecommunications ,Class (biology) ,lcsh:Q350-390 ,Term (time) ,ComputingMethodologies_PATTERNRECOGNITION ,symbols ,020201 artificial intelligence & image processing ,lcsh:Electronic computers. Computer science ,business ,computer - Abstract
With continuing advances in computer and information technologies, numerous research papers are published both online and offline, and as new research fields are continually created, users have difficulty finding and categorizing the papers that interest them. To overcome these limitations, this paper proposes a research paper classification system that clusters research papers into meaningful classes in which papers are likely to share similar subjects. The proposed system extracts representative keywords from the abstract of each paper and topics using the Latent Dirichlet Allocation (LDA) scheme. The K-means clustering algorithm is then applied to group papers with similar subjects, based on the term frequency-inverse document frequency (TF-IDF) values of each paper.
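A minimal sketch of the pipeline the abstract describes (LDA topic keywords plus K-means clustering over TF-IDF vectors), using scikit-learn; the toy corpus, topic count, and cluster count are illustrative assumptions rather than the authors' configuration.

```python
# TF-IDF + LDA + K-means sketch for grouping papers with similar subjects.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

abstracts = [
    "deep learning for medical image segmentation",
    "convolutional networks improve tumor detection",
    "blockchain consensus protocols for secure data sharing",
    "smart contracts and distributed ledgers in vehicular networks",
]

# Topic keywords via LDA on raw term counts.
counts = CountVectorizer(stop_words="english")
X_counts = counts.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_counts)
topic_words = [
    [counts.get_feature_names_out()[i] for i in comp.argsort()[-3:]]
    for comp in lda.components_
]
print(topic_words)  # top keywords per LDA topic

# TF-IDF representation of each abstract, then K-means clustering.
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_tfidf)
print(labels)  # e.g. [0 0 1 1]: papers grouped by similar subjects
```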
- Published
- 2019
40. Interference mitigation in heterogeneous networks with simple dirty paper coding
- Author
- Krishna Balachandran, Kiran M. Rege, Kemal M. Karakayali, and Joseph H. Kang
- Subjects
Computer Networks and Communications, Computer science, Distributed computing, Frequency band, Interference (wave propagation), Dirty paper coding, Heterogeneous network, Electrical and Electronic Engineering, Information Systems - Abstract
In heterogeneous networks where macro cells and metro cells use the same frequency band to communicate with their respective users, the major problem limiting performance is the interference caused by macro cells to metro cells. The information theoretic result known as dirty paper coding provides a way to address this problem and significantly improve the performance of heterogeneous networks with co-channel deployment. In this paper, we show how a simple dirty paper coding scheme employing Tomlinson-Harashima pre-coding with partial interference pre-subtraction can be employed by metro cells to mitigate the interference caused by macro cells. A performance study included in this paper shows that the proposed dirty paper coding scheme can lead to significant improvement in user rate statistics.
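As a rough illustration of the dirty paper coding idea behind Tomlinson-Harashima precoding, the sketch below pre-subtracts interference that is known at the transmitter and folds the result back with a modulo operation; it uses full (not partial) pre-subtraction and invented signal values, so it is a conceptual demo rather than the scheme evaluated in the paper.

```python
# Scalar Tomlinson-Harashima-style precoding demo (noiseless, full pre-subtraction).
import numpy as np

M = 4.0  # modulo base for a unit-spaced PAM-style alphabet

def mod(x, base=M):
    # Symmetric modulo keeps the precoded signal inside [-base/2, base/2).
    return x - base * np.round(x / base)

rng = np.random.default_rng(0)
symbols = rng.choice([-1.5, -0.5, 0.5, 1.5], size=8)   # intended metro-cell symbols
interference = rng.normal(scale=2.0, size=8)            # macro-cell interference known at the transmitter

tx = mod(symbols - interference)   # pre-subtract known interference, fold back with modulo
rx = tx + interference             # channel adds the interference back
detected = mod(rx)                 # receiver applies the same modulo to recover symbols

print(np.allclose(detected, symbols))  # True: interference removed without extra transmit power
```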
- Published
- 2019
41. Position paper: GPT conjecture: understanding the trade-offs between granularity, performance and timeliness in control-flow integrity
- Author
- Zhilong Wang and Peng Liu
- Subjects
Computer engineering. Computer hardware, Computer Networks and Communications, Computer science, Computer security, Conjecture, Trade-off, Granularity, Control-flow integrity, Position paper, Artificial Intelligence, Software, Information Systems - Abstract
The performance/security trade-off is widely noted in CFI research; however, we observe that not every CFI scheme is subject to it. Motivated by this key observation, we ask three questions: ➊ does the trade-off really exist across different CFI schemes? ➋ if it does exist, how do previous works comply with it? ➌ how can it inspire future research? Although these three questions probably cannot be answered directly, they are inspiring, and we find that a deeper understanding of the nature of the trade-off helps address them. Accordingly, we propose the GPT conjecture to pinpoint the trade-off in designing CFI schemes: at most two of the three properties (fine granularity, acceptable performance, and preventive protection) can be achieved.
- Published
- 2021
42. Automatic zone identification in scientific papers via fusion techniques
- Author
- Kambiz Badie, Maryam Mahmoudi, and Nasrin Asadi
- Subjects
Computer science, General Social Sciences, Library and Information Sciences, Computer Science Applications, Fusion, Automatic summarization, Identification (information), Information extraction, Sequential minimal optimization, Artificial intelligence, Natural language processing - Abstract
Zone identification is a topic in text mining that helps researchers benefit from the content of scientific papers more effectively. Its major aim is to classify the sentences of scientific texts into predefined zone categories, which is useful for summarization as well as information extraction. In this paper, we propose a two-level approach to zone identification in which the first level classifies the sentences of a given paper based on semantic and lexical features; several machine learning algorithms, such as Simple Logistic, Logistic Model Trees, and Sequential Minimal Optimization, are applied at this level. The second level fuses the first-level classification results of consecutive sentences to make the final decision. The proposed method is evaluated on the ART and DRI corpora, two well-known data sets. Zone-identification accuracies for these corpora are 65.75% and 84.15% respectively, which compare favorably with previous approaches.
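A hedged sketch of such a two-level pipeline: a sentence-level classifier (scikit-learn logistic regression stands in for the Weka-style learners named above) followed by a fusion step that takes a majority vote over consecutive sentence labels. The tiny training set, zone labels, and window size are illustrative assumptions.

```python
# Level 1: sentence classification; Level 2: fusion over consecutive sentences.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "We propose a novel method for zone identification.",
    "Our approach outperforms the baseline by five points.",
    "Previous work relied on handcrafted rules.",
    "Earlier studies examined citation contexts.",
]
train_zones = ["METHOD", "RESULT", "BACKGROUND", "BACKGROUND"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_sentences, train_zones)

def fuse(labels, window=3):
    # Majority vote over a window of consecutive sentence labels.
    fused = []
    for i in range(len(labels)):
        lo, hi = max(0, i - window // 2), min(len(labels), i + window // 2 + 1)
        fused.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return fused

paper_sentences = [
    "Prior approaches used handcrafted rules.",
    "We propose a two-level method.",
    "The method outperforms the baseline.",
]
print(fuse(list(clf.predict(paper_sentences))))
```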
- Published
- 2019
43. Sec-Lib: Protecting Scholarly Digital Libraries From Infected Papers Using Active Machine Learning Framework
- Author
- Aviad Cohen, Jian Wu, Yuval Elovici, C. Lee Giles, Lior Rokach, Andrea Lanzi, and Nir Nissim
- Subjects
General Computer Science, General Engineering, General Materials Science, Computer science, Scholarly digital libraries, Digital library, Malware, Exploit, PDF documents, World Wide Web - Abstract
Researchers from academia and the corporate sector rely on scholarly digital libraries to access articles. Attackers take advantage of innocent users who consider the articles' files safe and thus open PDF files with little concern. In addition, researchers consider scholarly libraries a reliable, trusted, and untainted corpus of papers. For these reasons, scholarly digital libraries are an attractive target and inadvertently support the proliferation of cyber-attacks launched via malicious PDF files. In this study, we present related vulnerabilities and malware-distribution approaches that exploit the weaknesses of scholarly digital libraries. We evaluated over two million scholarly papers in the CiteSeerX library and found it to be contaminated with a surprisingly large number (0.3-2%) of malicious PDF documents (over 55% of which were crawled from the IPs of US universities). We developed Sec-Lib, a two-layered detection framework aimed at enhancing the detection of malicious PDF documents and offering a security solution for large digital libraries. Sec-Lib includes a deterministic layer for detecting known malware and a machine-learning-based layer for detecting unknown malware. Our evaluation showed that scholarly digital libraries can detect 96.9% of malware with Sec-Lib while minimizing the number of PDF files requiring labeling, thus reducing the manual inspection effort of security experts by 98%.
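The two-layer idea can be sketched as follows: a deterministic layer that matches file hashes against known-malicious signatures, backed by a machine-learning layer for unseen files. The feature set, training data, and labels below are hypothetical and not Sec-Lib's actual design.

```python
# Conceptual two-layer scan: signature lookup, then ML classification.
import hashlib
from sklearn.ensemble import RandomForestClassifier

KNOWN_MALICIOUS_SHA256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

# Hypothetical structural features per PDF: (num_javascript_objects, num_embedded_files, num_pages)
train_features = [[0, 0, 12], [1, 0, 3], [5, 2, 1], [7, 1, 2]]
train_labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious
ml_layer = RandomForestClassifier(n_estimators=50, random_state=0).fit(train_features, train_labels)

def scan(pdf_bytes: bytes, features: list) -> str:
    # Layer 1: deterministic signature match for known malware.
    if hashlib.sha256(pdf_bytes).hexdigest() in KNOWN_MALICIOUS_SHA256:
        return "known-malicious"
    # Layer 2: ML prediction for previously unseen files.
    return "suspicious" if ml_layer.predict([features])[0] == 1 else "benign"

print(scan(b"%PDF-1.7 ...", [6, 1, 1]))  # likely "suspicious" with these toy features
```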
- Published
- 2019
44. Robust authentication for paper-based text documents based on text watermarking technology
- Author
- Zong Ming Guo, Tong Zhang, Yuxin Liu, Xi Feng Fang, Wei Guo, and Wen Fa Qi
- Subjects
Computer science, Pattern Recognition, Automated, Computer Graphics, Computer Security, Digital watermarking, Watermark, Authentication, Content integrity, Information retrieval, Embedding, Paper-based documents, Data Compression, Applied Mathematics, Computational Mathematics, Nonlinear Dynamics, Modeling and Simulation, Algorithms, Medical Informatics, Software - Abstract
To address the problems of easy tampering and difficult integrity authentication of paper text documents, this paper proposes a robust content-authentication method for printed documents based on a text watermarking scheme that resists print-and-scan attacks. First, an authentication watermark signal sequence related to the content of the text document is generated from the logistic chaotic map model; next, this sequence is embedded into the printed paper document using a robust text watermarking scheme; finally, the watermark information is extracted from a scanned image of the paper document and compared with the authentication watermark computed in real time from the document content obtained by OCR, thereby authenticating the content integrity of the paper text document. Experimental results show that our method achieves robust content-integrity authentication of paper text documents and can accurately locate tampered positions. In addition, the document retains a good visual appearance after the watermark is embedded, and the text watermarking scheme has a large information capacity.
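A minimal sketch, assuming a SHA-256-derived seed and the parameter r = 3.99, of generating a content-keyed authentication bit sequence with the logistic chaotic map x_{n+1} = r·x_n·(1 − x_n) and using it to compare the original content against OCR-recovered content; the seeding and thresholding details are illustrative, not the paper's exact scheme.

```python
# Content-keyed authentication bits from the logistic chaotic map.
import hashlib

def logistic_bits(text: str, n_bits: int = 64, r: float = 3.99) -> list:
    # Derive an initial value in (0, 1) from a hash of the document content.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    x = (int.from_bytes(digest[:8], "big") % 10**8) / 10**8 or 0.12345
    bits = []
    for _ in range(n_bits):
        x = r * x * (1 - x)          # iterate the logistic map
        bits.append(1 if x > 0.5 else 0)
    return bits

original = logistic_bits("Invoice total: 1,250.00 EUR")
ocr_result = logistic_bits("Invoice total: 1,250.00 EUR")   # text recovered by OCR
tampered = logistic_bits("Invoice total: 9,250.00 EUR")

print(original == ocr_result)  # True: content integrity confirmed
print(original == tampered)    # almost certainly False: tampering detected
```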
- Published
- 2019
45. EXPERIMENTAL DESIGN TO ANALYZE THE EFFECT OF WRITING FORMAT ON STUDENT READING COMPREHENSION ON PAPER MEDIA USING COMPLETELY RANDOMIZED FACTORIAL DESIGN
- Author
- Yuswono Hadi and Rexy Satrya Yuwana Adam
- Subjects
Computer science, Industrial engineering. Management engineering, Completely randomized factorial experimental design, Factorial experiment, Writing format, Test (assessment), Reading comprehension, Excessive paper usage, Environmental destruction, Multiple comparisons problem, Artificial intelligence, Natural language processing - Abstract
Writing papers in the existing format causes excessive paper usage, which can contribute to environmental damage. Therefore, this study seeks an alternative writing format through an experimental design based on student reading comprehension, using a completely randomized factorial experimental design. After analysis of variance and multiple comparison tests, the study identifies an alternative writing format that uses less paper than the existing procedure while yielding greater student reading comprehension. Applying the alternative format can therefore save paper and make papers easier for students to understand.
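For readers unfamiliar with the design, the sketch below runs a two-factor completely randomized design through a two-way ANOVA with statsmodels; the factors (font size, line spacing) and comprehension scores are invented for illustration and are not the study's data.

```python
# Two-way ANOVA for a completely randomized factorial design.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "font":    ["small"] * 4 + ["large"] * 4,
    "spacing": ["single", "single", "double", "double"] * 2,
    "score":   [62, 65, 70, 72, 68, 71, 78, 80],  # reading-comprehension scores
})

model = ols("score ~ C(font) * C(spacing)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction of the two factors
```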
- Published
- 2018
46. A TEI Customization for Paper and Watermarks Descriptions
- Author
- Ermenegilda Müller
- Subjects
Computer science, Information retrieval, Relational database, International standard, Metadata, XML, TEI, Bibliography, Manuscript studies, Paper history, Book history, Medieval history, Personalization - Abstract
Watermarks are key to retracing the origins of paper manuscripts and early printed books and to understanding the context in which they were produced. TEI-P5, one of the most commonly used XML standards for digital descriptions of text-bearing objects, offers the possibility to describe watermarks, but not yet in a sufficiently detailed and consistent manner. The present article introduces a TEI-P5 customization for the description of paper and watermarks based on the International Standard for the Description of Paper, Watermarks and Paper Moulds in Relational Databases (IPHN 2.1.1). This customization provides TEI users with tools to make detailed, structured and standardized descriptions of paper, watermarks, and paper moulds. Such descriptions have the potential to improve the communication and collaboration between the different scholars working on paper manuscripts and early printed books. Moreover, once organized into a database, they can be mined in order to determine the origin and circulation of paper types. Therefore, the present customization can represent a strong asset in the study of the origins and material contexts of paper documents, handwritten or printed.
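As a toy illustration only, the snippet below emits a TEI-flavoured paper-and-watermark description with Python's standard library; the element and attribute choices are placeholders and do not reproduce the schema defined by the customization or by the IPH standard.

```python
# Building a small TEI-style description of paper and a watermark as XML.
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

support_desc = ET.Element(f"{{{TEI_NS}}}supportDesc", {"material": "paper"})
watermark = ET.SubElement(support_desc, f"{{{TEI_NS}}}watermark")
ET.SubElement(watermark, f"{{{TEI_NS}}}term").text = "bull's head"       # motif name (placeholder)
ET.SubElement(watermark, f"{{{TEI_NS}}}height", {"unit": "mm"}).text = "42"
ET.SubElement(watermark, f"{{{TEI_NS}}}width", {"unit": "mm"}).text = "30"

print(ET.tostring(support_desc, encoding="unicode"))
```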
- Published
- 2020
47. Optimization of the development of latent fingermarks on thermal papers
- Author
- Laurent Tamisier, Pierre Ledroit, Marianne Malo, Damien Henrot, and Florine Hallez
- Subjects
Paper, Hot Temperature, Luminescent Agents, Time Factors, Ninhydrin, Acetates, Acetone, Indans, Cyanoacrylates, Indicators and Reagents, Magnetic powder, Powders, Volatilization, Dermatoglyphics, Humans, Comparative trial, Pathology and Forensic Medicine, Law - Abstract
Thermal papers, commonly used for printed receipts or lottery tickets, are omnipresent in everyday life. They are regarded as semi-porous substrates, yet they can be critical to analyze when looking for latent fingermarks because of their heat sensitivity. The aim of this study was to investigate a development sequence that best combines the appropriate detection techniques in order to maximize the chances of developing latent fingermarks left on these substrates. Different development methods were compared on test substrates: black magnetic powder, Lumicyano™, thermal development, ninhydrin, and 1,2-indanedione/ZnCl2. Whitening stages and thermal development were specifically tested and optimized. The results of these preliminary tests enabled the study of three development sequences, which were then compared with the one currently used in the Gendarmerie's laboratories; during pseudo-operational comparative trials, the best results were obtained with one of these sequences, consisting of six stages.
- Published
- 2019
48. Blockchain based secure data sharing system for Internet of vehicles: A position paper
- Author
- Jiangtao Li, Mingxing Luo, Man Ho Au, Tong Chen, Shengwei Tian, Kim-Kwang Raymond Choo, and Lei Zhang
- Subjects
Computer science, Computer security, Blockchain, Cryptocurrency, Data sharing, The Internet, Automotive Engineering, Electrical and Electronic Engineering, Position paper - Abstract
One of the benefits of the Internet of Vehicles (IoV) is improved traffic safety and efficiency, for example through the capability to share vehicular messages in real time. While most vehicular messages only need to be shared by nearby vehicles, some messages (e.g., announcement messages) may need to be distributed more broadly, for example to vehicles in a wider region. Finding a single trusted entity to store and distribute such messages can be challenging, and vehicles may not be inclined to participate (e.g., in the generation and distribution of announcement messages) unless they can benefit from such participation. In addition, achieving both security and privacy can be challenging. In this paper, we propose a blockchain-based secure data sharing system to address the above challenges in an IoV setting. Specifically, in our system, announcement messages are stored on a blockchain. To incentivize participation, vehicles that faithfully broadcast announcement messages and/or contribute to block generation are rewarded with cryptocurrency. Our system is also designed to be privacy-preserving and realizes both a priori and a posteriori countermeasures.
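To make the storage idea concrete, here is a toy hash-linked chain of announcement messages; the block structure and fields are assumptions for illustration and not the construction proposed in the paper.

```python
# Toy hash-linked chain of announcement messages.
import hashlib
import json
import time

def make_block(messages, prev_hash):
    body = {"timestamp": time.time(), "messages": messages, "prev_hash": prev_hash}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": block_hash}

genesis = make_block(["network bootstrap"], prev_hash="0" * 64)
block1 = make_block(["accident reported at junction 12"], prev_hash=genesis["hash"])

# Any change to an earlier announcement breaks the hash link of every later block.
print(block1["prev_hash"] == genesis["hash"])  # True
```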
- Published
- 2019
49. Short Paper: Mechanized Proofs of Verifiability and Privacy in a Paper-Based E-Voting Scheme
- Author
- Marie-Laure Zollinger, Peter Y. A. Ryan, and Peter B. Rønne
- Subjects
Computer science, Theoretical computer science, Cryptographic primitive, Mathematical proof, Voting, Formal verification, Short paper - Abstract
Electryo is a paper-based voting protocol that implements the Selene mechanism for individual verifiability. This short paper provides the first formal model of Electryo, with security proofs for vote privacy and individual verifiability. In general, voting protocols are complex constructs involving advanced cryptographic primitives and strong security guarantees, which poses a serious challenge when analysing and proving security with formal verification tools. Here we choose the Tamarin prover, since it is one of the more advanced tools and can handle many of the primitives encountered in the design and analysis of voting protocols.
- Published
- 2020
50. White Paper: the Provision of School Psychological Services to Dual Language Learners
- Author
- Cathi Draper Rodríguez, Pedro Olvera, and Carol Robinson-Zañartu
- Subjects
Medical education, School psychology, Educational psychology, General Medicine, White paper, Second language, Dual language, Neuroscience of multilingualism, Competence (human resources), Psychology - Abstract
School psychologists are increasingly tasked with assessing, supporting, and intervening with dual language learner (DLL) students, their teachers, and their families. Understanding the assets of bilingualism, along with the multiple issues associated with comprehensive practice with DLLs, has become paramount. Currently, practitioners often lack depth of knowledge in cultural variables, dual language acquisition, programs that effectively serve DLL students, bilingual assessment, and research- and evidence-based practice to serve DLLs competently, as well as depth in second-language competence. This white paper, endorsed by the School Psychology Educators of California (SPEC) and the California Association of School Psychologists (CASP), outlines areas of competence deemed essential for all psychologists, as well as additional areas of competence for practitioners who identify themselves as bilingual school psychologists.
- Published
- 2019