2,781 results
Search Results
2. Comparative Analysis of Reduction Methods on Provenance Graphs for APT Attack Detection
- Author
-
Gesell, Jan Eske, Buchta, Robin, Dangendorf, Kilian, Franzke, Pascal, Heine, Felix, Kleiner, Carsten, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Mosbah, Mohamed, editor, Sèdes, Florence, editor, Tawbi, Nadia, editor, Ahmed, Toufik, editor, Boulahia-Cuppens, Nora, editor, and Garcia-Alfaro, Joaquin, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Dynamic Data Inclusion with Sliding Window
- Author
-
Sanderson, Dominic, Kalganova, Tatiana, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Nagar, Atulya K., editor, Jat, Dharm Singh, editor, Mishra, Durgesh Kumar, editor, and Joshi, Amit, editor
- Published
- 2024
- Full Text
- View/download PDF
4. Application of attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) and principal component analysis (PCA) in identification of copying pencils on different paper substrates.
- Author
-
Grzelec, Małgorzata
- Subjects
-
GENTIAN violet, PRINCIPAL components analysis, METHYLENE blue, DATA reduction, PENCILS
- Abstract
The presence of copying pencils in heritage objects poses a significant challenge for conservators because of their proneness to fading, their sensitivity to solvents, and the difficulty of distinguishing them from regular graphite pencils. This paper presents a method of identifying copying pencils by means of ATR-FTIR spectroscopy. A protocol for spectra processing and dimensionality reduction of the spectral data by principal component analysis has been developed, allowing pencil types to be differentiated and providing an easy-to-read visual representation of the collected data. The protocol was developed on mock-up samples and tested on objects from the archives of the State Museum Auschwitz-Birkenau. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Healthy Diet Food Decision Using Rough-Chi-Squared Goodness
- Author
-
Efendi, Riswan, Sahid, Dadang S. S., Putra, Emansa H., Deris, Mustafa M., Annisa, Nurul G., Karina, Sari, Indah M., Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Mat Jizat, Jessnor Arif, editor, Khairuddin, Ismail Mohd, editor, Mohd Razman, Mohd Azraai, editor, Ab. Nasir, Ahmad Fakhri, editor, Abdul Karim, Mohamad Shaiful, editor, Jaafar, Abdul Aziz, editor, Hong, Lim Wei, editor, Abdul Majeed, Anwar P. P., editor, Liu, Pengcheng, editor, Myung, Hyun, editor, Choi, Han-Lim, editor, and Susto, Gian-Antonio, editor
- Published
- 2021
- Full Text
- View/download PDF
6. Enhancing the Interactive Visualisation of a Data Preparation Tool from in-Memory Fitting to Big Data Sets
- Author
-
Epelde, Gorka, Álvarez, Roberto, Beristain, Andoni, Arrúe, Mónica, Arangoa, Itsasne, Rankin, Debbie, van der Aalst, Wil, Series Editor, Mylopoulos, John, Series Editor, Rosemann, Michael, Series Editor, Shaw, Michael J., Series Editor, Szyperski, Clemens, Series Editor, Abramowicz, Witold, editor, and Klein, Gary, editor
- Published
- 2020
- Full Text
- View/download PDF
7. Hybrid Entropy Method for Large Data Set Reduction Using MLP-ANN and SVM Classifiers
- Author
-
Rashmi, Ghose, Udayan, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Batra, Usha, editor, Roy, Nihar Ranjan, editor, and Panda, Brajendra, editor
- Published
- 2020
- Full Text
- View/download PDF
8. Data Reduction Algorithm for the Electric Bus Scheduling Problem
- Author
-
Janovec, Maros, Kohani, Michal, Neufeld, Janis S., editor, Buscher, Udo, editor, Lasch, Rainer, editor, Möst, Dominik, editor, and Schönberger, Jörn, editor
- Published
- 2020
- Full Text
- View/download PDF
9. New Fuzzy Logic-Based Methods for the Data Reduction
- Author
-
Tati, Reyhaneh, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Ruediger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, and Montaser Kouhsari, Shahram, editor
- Published
- 2019
- Full Text
- View/download PDF
10. WOA-MLSVMs Dirty Degree Identification Method Based on Texture Features of Paper Currency Images.
- Author
-
Wei-Zhong Sun, Yue Ma, Zhen-Yu Yin, Jie-Sheng Wang, Ai Gu, and Fu-Jun Guo
- Subjects
SUPPORT vector machines, TEXTURES, IMAGE transmission, IMAGE sensors, DATA reduction, GABOR filters
- Abstract
The dirty degree of banknotes determines, to some extent, whether they can continue to circulate. This paper proposes a whale optimization algorithm based multi-layer support vector machine (WOA-MLSVMs) method for recognizing the dirty degree from the texture characteristics of banknote images. A contact image sensor collects double-sided reflection images of the banknotes under red, green, blue, infrared and ultraviolet light, as well as transmission images under green and infrared light. From these images, 22 texture parameters based on the gray-level co-occurrence matrix (GLCM), such as energy, entropy and inertia, are extracted to describe the visual characteristics of the banknote dirty degree. Banknote images are then selected according to the MLSVMs recognition results to build a full-spectrum sample data set of banknote dirty degrees. Five essential-dimension estimation methods and seventeen data dimension reduction methods are combined to determine the essential dimension and the optimal dimension reduction method. Finally, WOA-MLSVMs performs full-spectrum recognition of the banknote dirty degree, and simulation results show the effectiveness of the proposed strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
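The GLCM-based texture features named in the abstract above (energy, entropy, inertia) are standard quantities. As an illustrative sketch only, not the authors' code, and using a toy quantized image rather than banknote scans, a single-offset co-occurrence matrix and two of those features can be computed as:

```python
import math

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized
    so the entries form a probability distribution."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def energy(p):
    # Sum of squared co-occurrence probabilities (a.k.a. angular second moment).
    return sum(v * v for row in p for v in row)

def entropy(p):
    # Shannon entropy of the co-occurrence distribution, in nats.
    return -sum(v * math.log(v) for row in p for v in row if v > 0)
```

A full pipeline along the lines the paper describes would repeat this over several offsets and illumination channels to assemble the 22-parameter feature vector.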
11. Binning Based Data Reduction for Vector Field Data of a Particle-In-Cell Fusion Simulation
- Author
-
Kress, James, Choi, Jong, Klasky, Scott, Churchill, Michael, Childs, Hank, Pugmire, David, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Yokota, Rio, editor, Weiland, Michèle, editor, Shalf, John, editor, and Alam, Sadaf, editor
- Published
- 2018
- Full Text
- View/download PDF
12. Monitoring Data Reduction in Data Centers: A Correlation-Based Approach
- Author
-
Peng, Xuesong, Pernici, Barbara, Diniz Junqueira Barbosa, Simone, Series editor, Chen, Phoebe, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kotenko, Igor, Series editor, Liu, Ting, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, Helfert, Markus, editor, Klein, Cornel, editor, Donnellan, Brian, editor, and Gusikhin, Oleg, editor
- Published
- 2017
- Full Text
- View/download PDF
13. Parameterized Algorithms for Power-Efficient Connected Symmetric Wireless Sensor Networks
- Author
-
Bentert, Matthias, van Bevern, René, Nichterlein, André, Niedermeier, Rolf, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Fernández Anta, Antonio, editor, Jurdzinski, Tomasz, editor, Mosteiro, Miguel A., editor, and Zhang, Yanyong, editor
- Published
- 2017
- Full Text
- View/download PDF
14. Coefficient-Based Spline Data Reduction by Hierarchical Spaces
- Author
-
Bracco, Cesare, Giannelli, Carlotta, Sestini, Alessandra, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Floater, Michael, editor, Lyche, Tom, editor, Mazure, Marie-Laurence, editor, Mørken, Knut, editor, and Schumaker, Larry L., editor
- Published
- 2017
- Full Text
- View/download PDF
15. Factors Forming Community Resilience Affected By Floods.
- Author
-
Suhaeb, Firdaus W., Rasyid, Sri Jayanti, Wahda, Muhammad Aksha, Ramli, Mauliadi, and Kaseng, Ernawati S.
- Subjects
HAZARD mitigation, FLOODS, RIVER conservation, DATA reduction, CONFERENCE papers, JOB stress
- Abstract
This conference paper uses descriptive-qualitative research to describe and analyze the factors that form community resilience to floods. Informants were selected purposively from flood-affected communities, using criteria matched to the purpose of the study. Primary data were obtained from in-depth observations and interviews, while secondary data came from library sources and other relevant material. Data were collected through observation, interviews, and documentation, and analyzed with a descriptive-qualitative approach in several stages, namely data reduction, data presentation, and drawing conclusions. The results show that the factors forming community resilience to flood disasters are: (1) value factors long established in the flood-affected communities, namely mutual assistance; (2) economic factors, namely finding alternative jobs or coping strategies; (3) social factors, namely the knowledge and skills to adapt to floods, gained through non-formal training and counselling on disaster mitigation, through experience, or via social media and mass media such as television and radio; (4) institutional factors, namely socialization of early flood warnings and of flood disaster mitigation, appeals prohibiting the dumping of garbage in rivers, and essential food assistance before and during floods by the relevant government; and (5) infrastructure factors, including the construction of facilities and infrastructure such as river dredging, drainage, and river cliff protection walls. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Review paper on Software Data Reduction for Bug Triage
- Author
-
Karande, Rupali D. and Khandagale, H. P.
- Subjects
World Wide Web, Software, Computer science, Data science, Triage, Data reduction
- Published
- 2017
17. The Caltech-NRAO Stripe 82 Survey (CNSS) Paper I: The Pilot Radio Transient Survey In 50 deg$^2$
- Author
-
Shrinivas R. Kulkarni, M. M. Kasliwal, Kunal Mooley, Stephen Bourke, S. T. Myers, Eric C. Bellm, Gregg Hallinan, Dale A. Frail, Russ R. Laher, S. B. Cenko, David Levitan, Assaf Horesh, and Yi Cao
- Subjects
Active galactic nucleus, Proper motion, Astrophysics, Jansky, Transient, Light curve, Galaxy, Sky, Astronomy and Astrophysics, Space and Planetary Science, Instrumentation and Methods for Astrophysics (astro-ph.IM), Solar and Stellar Astrophysics (astro-ph.SR), High Energy Astrophysical Phenomena (astro-ph.HE), Astrophysics of Galaxies (astro-ph.GA), Data reduction
- Abstract
We have commenced a multi-year program, the Caltech-NRAO Stripe 82 Survey (CNSS), to search for radio transients with the Jansky VLA in the SDSS Stripe 82 region. The CNSS will deliver five epochs over the entire $\sim$270 deg$^2$ of Stripe 82, an eventual deep combined map with a rms noise of $\sim$40 $\mu$Jy and catalogs at a frequency of 3 GHz, and having a spatial resolution of 3". This first paper presents the results from an initial pilot survey of a 50 deg$^2$ region of Stripe 82, involving four epochs spanning logarithmic timescales between one week and 1.5 years, with the combined map having a median rms noise of 35 $\mu$Jy. This pilot survey enabled the development of the hardware and software for rapid data processing, as well as transient detection and follow-up, necessary for the full 270 deg$^2$ survey. Classification of variable and transient sources relied heavily on the wealth of multi-wavelength data in the Stripe 82 region, supplemented by repeated mapping of the region by the Palomar Transient Factory. $3.9^{+0.5}_{-0.9}$% of the detected point sources were found to vary by greater than 30%, consistent with similar studies at 1.4 GHz and 5 GHz. Multi-wavelength photometric data and light curves suggest that the variability is mostly due to shock-induced flaring in the jets of AGN. Although this was only a pilot survey, we detected two bona fide transients, associated with an RS CVn binary and a dKe star. Comparison with existing radio survey data revealed additional highly variable and transient sources on timescales between 5-20 years, largely associated with renewed AGN activity. The rates of such AGN possibly imply episodes of enhanced accretion and jet activity occurring once every $\sim$40,000 years in these galaxies. We compile the revised radio transient rates and make recommendations for future transient surveys and joint radio-optical experiments. (Abridged)
Comment: 26 pages, 22 figures, 4 tables. Accepted for publication in the Astrophysical Journal. Data products (images, catalogs, tables, and light curves) available at http://tauceti.caltech.edu/stripe82 . A regularly-updated compilation of radio transient surveys is available at http://www.tauceti.caltech.edu/kunal/radio-transient-surveys/index.html
- Published
- 2016
- Full Text
- View/download PDF
18. Methods for motion artifact reduction in online brain-computer interface experiments: a systematic review.
- Author
-
Schmoigl-Tonis, Mathias, Schranz, Christoph, and Müller-Putz, Gernot R.
- Subjects
BRAIN-computer interfaces, OPEN access publishing, EVIDENCE gaps, VIRTUAL communities, ELECTROMAGNETIC induction, DATA reduction
- Abstract
Brain-computer interfaces (BCIs) have emerged as a promising technology for enhancing communication between the human brain and external devices. Electroencephalography (EEG) is particularly promising in this regard because it has high temporal resolution and can be easily worn on the head in everyday life. However, motion artifacts caused by muscle activity, fasciculation, cable swings, or magnetic induction pose significant challenges in real-world BCI applications. In this paper, we present a systematic review of methods for motion artifact reduction in online BCI experiments. Using the PRISMA filter method, we conducted a comprehensive literature search on PubMed, focusing on open access publications from 1966 to 2022. We evaluated 2,333 publications against predefined filtering rules to identify existing methods and pipelines for motion artifact reduction in EEG data. We present a lookup table of all papers that passed the defined filters and of all methods and pipelines used, and compare their overall performance and suitability for online BCI experiments. We summarize suitable methods, algorithms, and concepts for motion artifact reduction in online BCI applications, highlight potential research gaps, and discuss existing community consensus. This review aims to provide a comprehensive overview of the current state of the field and to guide researchers in selecting appropriate methods for motion artifact reduction in online BCI experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. MULTIREDSHIFT LIMITS ON THE 21 cm POWER SPECTRUM FROM PAPER
- Author
-
James E. Aguirre, Daniel C. Jacobs, Richard F. Bradley, Irina I. Stefan, David F. Moore, David MacMahon, Adrian Liu, Zaki S. Ali, David DeBoer, William P. Walbrugh, Aaron R. Parsons, Judd D. Bowman, Jonathan C. Pober, Matthew R. Dexter, Jason Manley, Nicole E. Gugliucci, Chris Carilli, and Pat Klima
- Subjects
Physics, Cosmology and Nongalactic Astrophysics (astro-ph.CO), Dynamic range, Spectral density, Astronomy and Astrophysics, Astrophysics, Precision Array for Probing the Epoch of Reionization, Electromagnetic interference, Redshift, Telescope, Space and Planetary Science, Instrumentation and Methods for Astrophysics (astro-ph.IM), Reionization, Data reduction
- Abstract
The epoch of reionization power spectrum is expected to evolve strongly with redshift, and it is this variation with cosmic history that will allow us to begin to place constraints on the physics of reionization. The primary obstacle to the measurement of the EoR power spectrum is bright foreground emission. We present an analysis of observations from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) telescope which places new limits on the HI power spectrum over the redshift range of $7.5
Comment: 9 pages, submitted to ApJ, fixed author list and abstract typo
- Published
- 2015
20. Long-term preservation of big data: prospects of current storage technologies in digital libraries
- Author
-
Bhat, Wasim Ahmad
- Published
- 2018
- Full Text
- View/download PDF
21. A low‐cost method for testing and analyzing the cervical range of motion.
- Author
-
Zhang, Xun, Xu, Guanghua, Li, Zejin, Teng, Zhicheng, Zhang, Xin, and Zhang, Sicong
- Subjects
RANGE of motion of joints, ROTATIONAL motion, TEST methods, SPONDYLOSIS, DECOMPOSITION method, DATA reduction
- Abstract
Measurement of the cervical range of motion (CROM) is significant for the early diagnosis of cervical spondylosis and for determining the severity of the disease. To enable convenient, continuous measurement of CROM and to reduce the impact of data fluctuations, this paper proposes a low-cost method for testing and analyzing CROM. The method analyzes the correspondence between the smartphone orientation sensor angle and CROM, acquires the smartphone orientation sensing data, applies the extreme-point symmetric mode decomposition method, driven by the energy of the difference value, to extract an adaptive global mean curve of the CROM data, and then calculates the cervical ROM. A statistical analysis of the CROM test results is carried out. Results show that the proposed method can obtain the CROM, including flexion and extension, lateral flexion, and rotation, in a single measurement, with the advantages of continuous measurement and low cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Energy function-based and norm-free event-triggering for scheduling control data transmissions.
- Author
-
Kurtoglu, Deniz, Yucelen, Tansel, and Muse, Jonathan A.
- Subjects
DATA transmission systems, STATE feedback (Feedback control systems), ENERGY function, SCHEDULING, DATA reduction
- Abstract
This paper studies the problem of scheduling control data transmissions from embedded processors to physical systems. For this problem, we propose novel state feedback and output feedback control architectures predicated on energy function-based and norm-free event-triggering conditions, in which the embedded processor broadcasts a sampled value of its control signal through a zero-order-hold operator to the physical system when the left side of the event-triggering condition equals its right side. In this context, the energy function-based feature means that the right sides of the proposed event-triggering conditions involve an energy function and its time-derivative, making the selection of these right sides user-adjustable. The norm-free feature means that the left sides of the conditions do not depend on signal norms, allowing a better reduction of control data transmissions. System-theoretical analyses of our event-triggered state feedback and output feedback control architectures are carried out using the same energy functions and time-derivatives that appear in the proposed event-triggering conditions, and illustrative numerical examples demonstrate the efficacy of our contributions. To the best of our knowledge, the results in this paper are the first to link event-triggering conditions not only to energy functions but also to their time-derivatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Feature reduction of unbalanced data classification based on density clustering.
- Author
-
Wang, Zhen-Fei, Yuan, Pei-Yao, Cao, Zhong-Ya, and Zhang, Li-Ying
- Subjects
FEATURE selection, DATA reduction, CLASSIFICATION algorithms, BIG data
- Abstract
With the development of big data, the problem of imbalanced data sets is becoming more and more serious. When dealing with high-dimensional imbalanced datasets, traditional classification algorithms tend to favor the majority class and ignore the minority class, resulting in poor classification performance. In this paper, we study the classification of high-dimensional imbalanced datasets and propose a feature selection algorithm based on density clustering and an importance measure (DBIM). DBIM first constructs multiple balanced subsets by randomly under-sampling the majority classes down to the number of minority-class samples, and uses DBSCAN as the base classifier. This process quickly discovers density-based feature distribution characteristics and generates the initial feature subspace. To select features that discriminate strongly between class labels, the features in the initial subspace are ranked and selected according to their importance. To avoid redundancy between features and generate high-quality feature subsets, DBIM further combines a new class distribution-based weight index with a redundancy evaluation index to measure the redundancy between features. Experimental results on eight publicly available datasets show that DBIM generates feature subsets with high relevance and low redundancy, effectively reduces the dimensionality of high-dimensional imbalanced datasets, and improves classification performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
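The first step of DBIM as described in the abstract above, randomly under-sampling the majority class to the minority-class size to form several balanced subsets, can be sketched as follows; the function name and interface are illustrative assumptions, not the authors' implementation:

```python
import random

def balanced_subsets(majority, minority, k, seed=0):
    """Build k balanced subsets: each combines a random sample of the
    majority class, sized to match the minority class, with the full
    minority class (DBIM's initial construction step)."""
    rng = random.Random(seed)
    return [rng.sample(majority, len(minority)) + list(minority)
            for _ in range(k)]
```

Each balanced subset would then be clustered with DBSCAN to probe the density structure of the features.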
24. Implementation of the Directorate General of Treasury's Electronic-Office Online Application at the Makassar State Treasury Service Office.
- Author
-
Nasrullah, Muh., Siraj, Muhammad Luthfi, Didin, and Wardah, Siti Syarifah Wafikah
- Subjects
RESEARCH methodology, GOVERNMENT policy, OUTREACH programs, DATA reduction
- Abstract
This study aims to identify and analyze, from a communication perspective, the implementation of the Directorate General of Treasury application at the Makassar State Treasury Service Office. The research is qualitative, with a policy approach to analyzing government policy; data were collected through interviews, observation, and documentation, and analyzed with a qualitative interactive model comprising data reduction, data presentation, and drawing conclusions. The results show that the implementation of the e-DJPb Online Application at the Makassar State Treasury Service Office is considered quite effective and efficient. Inter-agency communication from the Directorate General, from the central level down to the regions, has conveyed in stages the regulations governing the e-DJPb application, which has been transformed into an online-based service. The Makassar State Treasury Service Office has also provided clear and transparent information to the public through regular outreach activities, and this information has been passed on by local officials to the communities they serve. This distribution of information makes it easier for the community to handle tax administration online and creates paper and cost efficiencies, so the local government no longer spends budget on administrative tasks that have been integrated into the online system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Dimensionality reduction model based on integer planning for the analysis of key indicators affecting life expectancy.
- Author
-
Cui, Wei, Xu, Zhiqiang, and Mu, Ren
- Subjects
LIFE expectancy, INTEGER programming, DATA reduction, DATA mining, DATA visualization, WORLD health
- Abstract
Exploring a dimensionality reduction model that can adeptly eliminate outliers and select the appropriate number of clusters is of profound theoretical and practical importance, and the interpretability of such models remains a persistent challenge. This paper proposes two innovative dimensionality reduction models based on integer programming (DRMBIP). These models assess compactness through the correlation of each indicator with its class center, while separation is evaluated by the correlation between different class centers. In contrast to DRMBIP-p, DRMBIP-v treats the threshold parameter as a variable, aiming to optimally balance compactness and separation. Using data from the Global Health Observatory (GHO), this study investigates 141 indicators that influence life expectancy. The findings reveal that DRMBIP-p effectively reduces the dimensionality of the data while ensuring compactness, and it remains compatible with other models. DRMBIP-v finds the optimal result, showing exceptional separation. Visualization of the results reveals that all classes have high compactness. DRMBIP-p requires a correlation threshold parameter as input, which plays a pivotal role in the effectiveness of the final dimensionality reduction. In DRMBIP-v, promoting the threshold parameter to a variable can emphasize either separation or compactness, which necessitates an artificial adjustment of the overflow component within the objective function. The DRMBIP presented in this paper is adept at uncovering the primary geometric structures within high-dimensional indicators. Validated on life expectancy data, it shows potential to assist data miners in reducing data dimensions. To our knowledge, this is the first time integer programming has been used to build a dimensionality reduction model with indicator filtering. It has applications not only in life expectancy but also clear advantages in data mining work that requires precise class centers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. The principled principal: the case of Australian Steiner schools.
- Author
-
Eacott, Scott
- Subjects
SCHOOL administration, DATA reduction, LEGAL authorities, DECISION making
- Abstract
Purpose: Steiner schools represent a natural experiment in the provision of schooling. Although Steiner education has a history dating back more than 100 years, leadership, leaders and the principal do not sit easily with Steiner educators. The contemporary regulatory environment requires a "principal" or legal authority at the school-building level, creating a tension for Steiner schools and making them an ideal case study for understanding the contemporary role of the principal. Design/methodology/approach: This paper is based on an interview-based study with 24 heads of Australian Steiner schools. Conducted on Microsoft Teams, all by the principal investigator, the interviews generated a 171,742-word corpus subjected to an inductive analytical approach. Data reduction led to four themes; this paper focuses on one (principles not prescription) and its implications for the principalship and school governance. Findings: Embedding the principalship in a philosophy (or theory) of education re-couples school administration with schooling and bases decision-making in principles rather than individuals. It also alters the role of data and evidence from accountability to justifying principles. Research limitations/implications: Rather than focusing on individuals or roles, this paper argues that the underlying principles of organisational decision-making should be the central focus of research. Practical implications: Ensuring organisational coherence by balancing the diversity of positions on core principles is the core task of the contemporary principal. Originality/value: Exploiting natural experiments in the provision of schooling makes it possible to argue for how schooling, and specifically the principalship, can be different. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. A Mutual Information Based Adaptive Windowing of Informative EEG for Emotion Recognition.
- Author
-
Piho, Laura and Tjahjadi, Tardi
- Abstract
Emotion recognition using brain wave signals involves high-dimensional electroencephalogram (EEG) data. In this paper, a window selection method based on mutual information is introduced to select an appropriate signal window and thereby reduce the length of the signals. The windowing method is motivated by EEG emotion recognition being computationally costly and the data having a low signal-to-noise ratio; its aim is to find a reduced signal where the emotions are strongest. In this paper, it is suggested that using only the signal section which best describes emotions improves the classification of emotions. This is achieved by iteratively comparing different-length EEG signals at different time locations, using the mutual information between the reduced signal and the emotion labels as the criterion. The reduced signal with the highest mutual information is used to extract the features for emotion classification. In addition, a viable framework for emotion recognition is introduced. Experimental results on publicly available datasets, DEAP and MAHNOB-HCI, show significant improvement in emotion recognition accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
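The window selection described in the abstract can be sketched numerically. The paper does not give its exact mutual-information estimator or features, so the sketch below (hypothetical `mutual_information` and `best_window` helpers) uses a simple histogram MI on a per-trial window mean, scanning candidate windows and keeping the one whose summary feature carries the most information about the labels:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_window(trials, labels, lengths, step):
    """Pick the (start, length) whose per-trial mean has maximal MI with labels."""
    n_samples = trials.shape[1]
    best = (0, lengths[0], -np.inf)
    for L in lengths:
        for s in range(0, n_samples - L + 1, step):
            feat = trials[:, s:s + L].mean(axis=1)  # one summary feature per trial
            mi = mutual_information(feat, labels)
            if mi > best[2]:
                best = (s, L, mi)
    return best
```

On synthetic trials where only one time window differs between classes, the scan recovers that window; the real method iterates over many lengths and locations in the same spirit.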
28. Space reduction constraints for the median of permutations problem.
- Author
-
Milosz, Robin and Hamel, Sylvie
- Subjects
- *
PERMUTATIONS , *MEDIAN (Mathematics) , *CONFERENCE papers , *SPACE , *DATA reduction - Abstract
Given a set A ⊆ Sₙ of m permutations of {1, 2, ..., n} and a distance function d, the median problem consists of finding a permutation π* that is the "closest" to the m given permutations. Here, we study the problem under the Kendall-τ distance, which counts the number of order disagreements between pairs of elements of two permutations. This problem has been proved to be NP-hard when m ≥ 4, m even. In this article, which is a full version of the conference paper Milosz and Hamel (2016), we investigate new theoretical properties of A that resolve the relative order between pairs of elements in median permutations of A, thus drastically reducing the search space of the problem. The resulting preprocessing of the problem is implemented with a branch-and-bound solving algorithm. We analyze its performance on randomly generated data and on real data. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
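The Kendall-τ distance named in the abstract counts pairwise order disagreements. A minimal sketch, with a brute-force exact median that is usable only for tiny n (the problem is NP-hard for m ≥ 4, m even, which is what motivates the paper's search-space reduction):

```python
from itertools import permutations

def kendall_tau(p, q):
    """Number of element pairs ordered differently in permutations p and q."""
    pos_q = {v: i for i, v in enumerate(q)}
    n = len(p)
    d = 0
    for i in range(n):
        for j in range(i + 1, n):
            # pair (p[i], p[j]) appears in this order in p; count if reversed in q
            if pos_q[p[i]] > pos_q[p[j]]:
                d += 1
    return d

def median_brute_force(A):
    """Exact Kendall-tau median by exhaustive search (illustration only)."""
    n = len(A[0])
    return min(permutations(range(1, n + 1)),
               key=lambda pi: sum(kendall_tau(pi, a) for a in A))
```

The paper's constraints fix the relative order of element pairs on which the input permutations (provably) agree in every median, so the branch-and-bound never has to enumerate the full n! space as this toy version does.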
29. Malware detection for IoT devices using hybrid system of whitelist and machine learning based on lightweight flow data.
- Author
-
Nakahara, Masataka, Okui, Norihiro, Kobayashi, Yasuaki, Miyake, Yutaka, and Kubota, Ayumu
- Subjects
HYBRID systems ,MACHINE learning ,INTERNET of things ,DATA reduction ,MALWARE ,DATA transmission systems - Abstract
Because IoT deployments typically involve a large number and variety of devices, it is important to collect data efficiently and to detect threats in a lightweight way. In this paper, we propose an architecture for malware detection, a method to detect malware using flow information, and a method to decrease the amount of data transmitted between the servers in this architecture. We evaluate the performance of malware detection and the amount of data before and after the data reduction, and show that detection performance is maintained even though the amount of data is reduced. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Online evolution of a phased array for ultrasonic imaging by a novel adaptive data acquisition method.
- Author
-
Lukacs, Peter, Stratoudaki, Theodosia, Davis, Geo, and Gachagan, Anthony
- Subjects
PHASED array antennas ,ULTRASONIC arrays ,ACQUISITION of data ,ULTRASONIC imaging ,DATA reduction ,TIME measurements - Abstract
Ultrasonic imaging, using ultrasonic phased arrays, has an enormous impact in science, medicine and society and is a widely used modality in many application fields. The maximum amount of information which can be captured by an array is provided by the data acquisition method capturing the complete data set of signals from all possible combinations of ultrasonic generation and detection elements of a dense array. However, capturing this complete data set requires a long data acquisition time, a large number of array elements and transmit channels, and produces a large volume of data. All these reasons make such data acquisition unfeasible with existing phased array technology, or inapplicable to cases requiring fast measurement time. This paper introduces the concept of an adaptive data acquisition process, the Selective Matrix Capture (SMC), which can adapt dynamically to specific imaging requirements for efficient ultrasonic imaging. SMC is realised experimentally using Laser Induced Phased Arrays (LIPAs), which use lasers to generate and detect ultrasound. The flexibility and reconfigurability of LIPAs enable the evolution of the array configuration on-the-fly. The SMC methodology consists of two stages: a stage for detecting and localising regions of interest, by means of iteratively synthesising a sparse array, and a second stage for array optimisation to the region of interest. Delay-and-sum is used as the imaging algorithm and the experimental results are compared to images produced using the complete generation-detection data set. It is shown that SMC, without a priori knowledge of the test sample, is able to achieve comparable results, while performing ∼10 times faster data acquisition and achieving ∼10 times reduction in data size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
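The delay-and-sum imaging algorithm named above can be sketched as follows. The geometry, helper name and impulse-like signals are illustrative assumptions, not the paper's LIPA setup: each pixel accumulates, over every generation-detection pair, the sample at the round-trip traveltime to that pixel.

```python
import numpy as np

def delay_and_sum(fmc, elems, grid, c, fs):
    """Delay-and-sum image from a complete generation-detection data set.
    fmc[t, r, s]: sample s of the signal generated at element t, detected at r;
    elems: element coordinates (n, 2); grid: pixel coordinates (m, 2);
    c: wave speed; fs: sampling rate."""
    n_el = len(elems)
    n_s = fmc.shape[2]
    # distance from every pixel to every element, shape (m, n)
    dists = np.linalg.norm(grid[:, None, :] - elems[None, :, :], axis=2)
    image = np.zeros(len(grid))
    for t in range(n_el):
        for r in range(n_el):
            # round-trip traveltime: generation element -> pixel -> detection element
            idx = np.round((dists[:, t] + dists[:, r]) / c * fs).astype(int)
            valid = idx < n_s
            image[valid] += fmc[t, r, idx[valid]]
    return image
```

With synthetic impulses from a single scatterer, the image peaks at the scatterer's pixel; SMC's contribution is choosing *which* generation-detection pairs to acquire before this summation is ever run.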
31. Methodological concerns in case-based research in industrial engineering: revisiting the challenges towards further recommendations.
- Author
-
Augusto Cauchick-Miguel, Paulo, Tavares Sousa-Zomer, Thayla, and Tortorella, Guilherme
- Subjects
LITERATURE reviews ,INDUSTRIAL research ,RESEARCH personnel ,INDUSTRIAL engineering ,DATA reduction ,RESEARCH methodology - Abstract
Paper aims: This paper addresses difficulties among the Brazilian scholarly community in industrial engineering (IE) when conducting case-based research. It also provides recommendations to increase methodological rigour. Originality: The paper contributes to the practice of case research by providing a historical perspective on research methodology in Brazil and offering guidance to improve the adoption of case research as well as its methodological rigour. Research method: The main challenges in conducting case research were first identified through a literature review. Then, an exploratory survey of Brazilian scholars was conducted to identify the challenges they perceive. Recommendations are then provided, especially regarding the data analysis stage. The recommendations are discussed in the light of the existing literature and based on the authors' experience in conducting qualitative research. Main findings: The difficulties in conducting case research identified by scholars can be classified into three 'Achilles' heels': (i) weak theoretical background; (ii) careless case study design/planning; and (iii) fragile/uncertain data analysis. Suggestions to improve the data analysis process consist of building a narrative, data reduction, improving coding, etc. Improving validity is also necessary. Implications for theory and practice: The recommendations are especially meaningful to early-stage researchers and provide guidance to improve robustness when conducting case research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. A bidirectional reversible and multilevel location privacy protection method based on attribute encryption.
- Author
-
Hu, Zhaowei, Hu, Kaiyi, and Hasan, Milu Md Khaled
- Subjects
LOCATION data ,QUALITY of service ,DATA reduction ,PRIVATE security services ,TRUST ,ACCESS control - Abstract
Various methods such as k-anonymity and differential privacy have been proposed to safeguard users' private information in the publication of location service data. However, these typically employ a rigid "all-or-nothing" privacy standard that fails to accommodate users' more nuanced and multi-level privacy-related needs. Data is irrecoverable once anonymized, leading to a permanent reduction in location data quality, in turn significantly diminishing data utility. In the paper, a novel, bidirectional and multi-layered location privacy protection method based on attribute encryption is proposed. This method offers layered, reversible, and fine-grained privacy safeguards. A hierarchical privacy protection scheme incorporates various layers of dummy information, using an access structure tree to encrypt identifiers for these dummies. Multi-level location privacy protection is achieved after adding varying amounts of dummy information at different hierarchical levels N. This allows for precise control over the de-anonymization process, where users may adjust the granularity of anonymized data based on their own trust levels for multi-level location privacy protection. This method includes an access policy which functions via an attribute encryption-based access control system, generating decryption keys for data identifiers according to user attributes, facilitating a reversible transformation between data anonymity and de-anonymity. The complexities associated with key generation, distribution, and management are thus markedly reduced. Experimental comparisons with existing methods demonstrate that the proposed method effectively balances service quality and location privacy, providing users with multi-level and reversible privacy protection services. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Abridged spectral matrix inversion: parametric fitting of X-ray fluorescence spectra following integrative data reduction
- Author
-
Simon J. George, Ben Huntsman, Ingrid J. Pickering, Graham N. George, Cheyenne D. Kiani, Olena Ponomarenko, Monica Y. Weng, and Andrew M. Crawford
- Subjects
Nuclear and High Energy Physics ,Astrophysics::High Energy Astrophysical Phenomena ,X-ray fluorescence ,01 natural sciences ,Fluorescence ,Spectral line ,law.invention ,010309 optics ,Matrix (mathematics) ,law ,0103 physical sciences ,010306 general physics ,Instrumentation ,Parametric statistics ,Physics ,Radiation ,X-Rays ,Research Papers ,Synchrotron ,Computational physics ,Radiography ,Linear algebra ,Algorithms ,Synchrotrons ,Energy (signal processing) ,Data reduction - Abstract
Recent improvements in both X-ray detectors and readout speeds have led to a substantial increase in the volume of X-ray fluorescence data being produced at synchrotron facilities. This in turn results in increased challenges associated with processing and fitting such data, both temporally and computationally. Herein an abridging approach is described that both reduces and partially integrates X-ray fluorescence (XRF) data sets to obtain a fivefold total improvement in processing time with negligible decrease in quality of fitting. The approach is demonstrated using linear least-squares matrix inversion on XRF data with strongly overlapping fluorescent peaks. This approach is applicable to any type of linear algebra based fitting algorithm to fit spectra containing overlapping signals wherein the spectra also contain unimportant (non-characteristic) regions which add little (or no) weight to fitted values, e.g. energy regions in XRF spectra that contain little or no peak information.
- Published
- 2021
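The fitting step named in the abstract, linear least-squares matrix inversion over strongly overlapping fluorescence peaks, together with an "abridged" variant that drops non-characteristic channels, can be sketched as below. The Gaussian reference shapes, threshold and amplitudes are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Two strongly overlapping fluorescence lines as reference profiles (assumed shapes)
x = np.linspace(0, 20, 2000)
refs = np.stack([gaussian(x, 8.0, 0.6), gaussian(x, 9.0, 0.6)], axis=1)

rng = np.random.default_rng(1)
true_amps = np.array([3.0, 1.5])
spectrum = refs @ true_amps + 0.01 * rng.normal(size=x.size)

# Full fit: linear least squares over every channel
amps_full, *_ = np.linalg.lstsq(refs, spectrum, rcond=None)

# Abridged fit: keep only channels where some reference carries signal,
# discarding energy regions that add little or no weight to the fitted values
keep = refs.max(axis=1) > 1e-3
amps_abr, *_ = np.linalg.lstsq(refs[keep], spectrum[keep], rcond=None)
```

Dropping the uninformative channels shrinks the system substantially while leaving the fitted amplitudes essentially unchanged, which is the effect the abridging approach exploits at scale.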
34. An efficiency-improved GPU algorithm for the 2 + 2 + 1 method in nonlinear beamforming.
- Author
-
Sun, Yimin, Silvestrov, Ilya, and Bakulin, Andrey
- Subjects
APPROXIMATION algorithms ,SIGNAL processing ,DATA reduction ,NONLINEAR equations ,DATA quality - Abstract
Nonlinear beamforming (NLBF) has emerged as a highly effective technology for enhancing seismic data quality. The crux of NLBF's success lies in its ability to robustly estimate local traveltime operators directly from input data, a process that entails solving millions or even billions of nonlinear optimization problems per input gather. Among the solvers used for estimating these operators is the 2 + 2 + 1 method, for which we have previously introduced algorithmic implementations on both the CPU and GPU platforms. In this paper, we present an efficiency-improved GPU algorithm for the 2 + 2 + 1 method, particularly beneficial when dealing with small data apertures in NLBF. Our enhanced GPU algorithm brings significant improvements in computation efficiency through several strategic measures, which include leveraging Horner's method to minimize the mathematical overhead of traveltime calculation, implementing a GPU-friendly data reduction algorithm to exploit GPU computational power, and optimizing shared GPU memory usage as the primary workspace whenever feasible. To demonstrate the tangible efficiency enhancement achieved by our new GPU algorithm, via two illustrative examples, we compare its performance with that of our previous implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
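Horner's method, cited above for minimizing the mathematical overhead of traveltime calculation, is the standard evaluation scheme below; this is a general sketch, not the paper's GPU kernel:

```python
def horner(coeffs, t):
    """Evaluate a polynomial with Horner's scheme.
    coeffs run from the highest power down to the constant term, so a degree-n
    polynomial costs n multiplications and n additions, with no exponentiation."""
    acc = 0.0
    for c in coeffs:
        acc = acc * t + c
    return acc
```

For a traveltime approximated by a low-degree polynomial in offset, this form is evaluated millions of times per gather, so removing the repeated power computations matters.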
35. Low-Power Preprocessing System at MCU-Based Application Nodes for Reducing Data Transmission.
- Author
-
Kim, Donguk, Roh, Chanhwi, Baek, Donkyu, and Choi, Seong-gon
- Subjects
COMPUTER hardware description languages ,DATA transmission systems ,LOGIC design ,EDGE computing ,DATA reduction - Abstract
Edge computing enables prompt responses in IoT environments, such as the operation of autonomous vehicles and unmanned aerial vehicles. However, with the increase in sensor nodes, the computational burden on the computing node also increases. Specifically, data filtering and reduction at application nodes add to the energy burden for battery-operated devices. In this paper, we propose a preprocessing system at the application node that requires low power consumption for data transmission reduction. Based on our simulations, we identify the minimum data size needed to preserve the signal. We first design the preprocessing system using a hardware description language to evaluate its performance. Then, we implement the open-library-based MCU system, including the proposed preprocessing IP, to assess its operation and overhead. Our implementation of the preprocessing system reduces data transmission by 50% with acceptable information loss. Additionally, the area and power consumption after the logic synthesis of the preprocessing IP within the entire MCU system are evaluated at only 3.6% and 13.1%, respectively. By performing preprocessing using the MCU and proposed IP, nearly 74.4% power reduction is achieved compared to using the existing MCU core. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Data reduction for serial crystallography using a robust peak finder
- Author
-
Anton Barty, Romain Letrun, Marco Kloos, Dominik Oberthuer, Dana Komadina, W. Brehm, Luca Gelisio, Adrian P. Mancuso, Brian Abbey, M. Galchenkova, Alireza Sadri, Henry N. Chapman, Grant Mills, Oleksandr Yefanov, Henry Kirkwood, Raphael de Wijn, Connie Darmanin, Marjan Hadian-Jazi, and Mohammad Vakili
- Subjects
0303 health sciences ,Data processing ,Discretization ,Computer science ,Detector ,Robust statistics ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Research Papers ,General Biochemistry, Genetics and Molecular Biology ,3. Good health ,Background noise ,03 medical and health sciences ,Crystallography ,Bragg peak finding ,robust statistics ,ddc:540 ,Outlier ,data reduction ,Probability distribution ,serial crystallography ,0210 nano-technology ,030304 developmental biology ,Data reduction - Abstract
This article focuses on the challenges of hit finding and data reduction in serial crystallography (SX). An effective and reliable Bragg-peak-finding method, called the robust peak finder (RPF), has been developed. RPF is based on the principle of robust statistics and can be used for SX data analysis. A peak-finding algorithm for serial crystallography (SX) data analysis based on the principle of 'robust statistics' has been developed. Methods which are statistically robust are generally more insensitive to any departures from model assumptions and are particularly effective when analysing mixtures of probability distributions. For example, these methods enable the discretization of data into a group comprising inliers (i.e. the background noise) and another group comprising outliers (i.e. Bragg peaks). Our robust statistics algorithm has two key advantages, which are demonstrated through testing using multiple SX data sets. First, it is relatively insensitive to the exact value of the input parameters and hence requires minimal optimization. This is critical for the algorithm to be able to run unsupervised, allowing for automated selection or 'vetoing' of SX diffraction data. Secondly, the processing of individual diffraction patterns can be easily parallelized. This means that it can analyse data from multiple detector modules simultaneously, making it ideally suited to real-time data processing. These characteristics mean that the robust peak finder (RPF) algorithm will be particularly beneficial for the new class of MHz X-ray free-electron laser sources, which generate large amounts of data in a short period of time.
- Published
- 2021
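The abstract describes discretizing detector data into background inliers and Bragg-peak outliers via robust statistics. A minimal sketch of that idea, using the median and the median absolute deviation (the actual RPF is considerably more elaborate and parameter-insensitive), might look like:

```python
import numpy as np

def split_outliers(frame, k=6.0):
    """Separate a detector frame into background (inliers) and candidate Bragg
    peaks (outliers) using the median and the median absolute deviation (MAD),
    robust estimates that are barely affected by the peaks themselves."""
    med = np.median(frame)
    mad = np.median(np.abs(frame - med))
    sigma = 1.4826 * mad             # MAD -> standard deviation for Gaussian noise
    return frame - med > k * sigma   # True where a pixel is a peak candidate
```

Because median and MAD ignore the contaminating peaks, the background scale estimate stays correct even on frames full of bright spots, which is precisely why robust estimators suit mixtures of distributions.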
37. Magnetic Fields of Chemically Peculiar and Related Stars. 5. Main Results of 2018 and Near-Future Prospects.
- Author
-
Romanyuk, I. I.
- Subjects
WHITE dwarf stars ,STELLAR magnetic fields ,MAGNETIC fields ,SPACE telescopes ,NOVAE (Astronomy) ,STARS ,DATA reduction - Abstract
We have surveyed about a hundred papers published in 2018 in the leading astronomical journals related to the "Magnetic fields and physical parameters of chemically peculiar and related stars" subject area. We have considered new telescope projects and instruments mounted on them, as well as the first results obtained with telescopes recently put into operation. We have reviewed new papers on observation methods, data reduction and analysis. Spectroscopic studies of peculiar stars, including their chemical abundances and other parameters, are presented in the paper. We continued conducting both classical ground-based photometric observations and high-accuracy photometry with space telescopes. Our survey pays particular attention to the magnetic fields of stars. We present observations of the large-scale fields of OBA stars and the local fields of cool active stars. Dozens of new magnetic stars have been discovered. We also consider some observations of magnetic white dwarfs and exoplanets which are of interest within the issue under study. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. The Spanish CMS Analysis Facility at CIEMAT.
- Author
-
Cárdenas-Montes, M., Delgado Peris, A., Flix, J., Hernández, J.M., León Holgado, J., Morcillo Pérez, C., Pérez-Calero Yzquierdo, A., and Rodríguez Calonge, F.J.
- Subjects
LARGE Hadron Collider ,COMPACT muon solenoid experiment ,DATA reduction ,GRID computing ,DATA analysis - Abstract
The increasingly larger data volumes that the LHC experiments will accumulate in the coming years, especially in the High-Luminosity LHC era, call for a paradigm shift in the way experimental datasets are accessed and analyzed. The current model, based on data reduction on the Grid infrastructure, followed by interactive data analysis of manageable size samples on the physicists' individual computers, will be superseded by the adoption of Analysis Facilities. This rapidly evolving concept is converging to include dedicated hardware infrastructures and computing services optimized for the effective analysis of large HEP data samples. This paper describes the actual implementation of this new analysis facility model at the CIEMAT institute, in Spain, to support the local CMS experiment community. Our work details the deployment of dedicated highly performant hardware, the operation of data staging and caching services ensuring prompt and efficient access to CMS physics analysis datasets, and the integration and optimization of a custom analysis framework based on ROOT's RDataFrame and CMS NanoAOD format. Finally, performance results obtained by benchmarking the deployed infrastructure and software against a CMS analysis workflow are summarized. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Anomalous SAXS at P12 beamline EMBL Hamburg: instrumentation and applications
- Author
-
Karen Manalastas-Cantos, Daniel Franke, Alexey Kikhney, Dmitri I. Svergun, Andrey Yu. Gruzinov, Martin A. Schroer, Florian Schulz, Nelly R. Hajizadeh, and Clement E. Blanchet
- Subjects
Nuclear and High Energy Physics ,Materials science ,metalloproteins ,ASAXS ,02 engineering and technology ,010402 general chemistry ,01 natural sciences ,Data acquisition ,ddc:550 ,anomalous scattering ,Instrumentation ,Radiation ,Anomalous scattering ,Small-angle X-ray scattering ,Scattering ,DESY ,biological SAXS ,021001 nanoscience & nanotechnology ,Research Papers ,0104 chemical sciences ,Computational physics ,beamline development ,Beamline ,gold nanoparticles ,0210 nano-technology ,Storage ring ,Data reduction - Abstract
Journal of Synchrotron Radiation 28(3), 812-823 (2021). doi:10.1107/S1600577521003404. Small-angle X-ray scattering (SAXS) is an established method for studying nanostructured systems and in particular biological macromolecules in solution. To obtain element-specific information about the sample, anomalous SAXS (ASAXS) exploits changes of the scattering properties of selected atoms when the energy of the incident X-rays is close to the binding energy of their electrons. While ASAXS is widely applied to condensed matter and inorganic systems, its use for biological macromolecules is challenging because of the weak anomalous effect. Biological objects are often only available in small quantities and are prone to radiation damage, which makes biological ASAXS measurements very challenging. The BioSAXS beamline P12, operated by the European Molecular Biology Laboratory (EMBL) at the PETRA III storage ring (DESY, Hamburg), is dedicated to studies of weakly scattering objects. Here, recent developments at P12 allowing for ASAXS measurements are presented. The beamline control, data acquisition and data reduction pipeline of the beamline were adapted to conduct ASAXS experiments. Modelling tools were developed to compute ASAXS patterns from atomic models, which can be used to analyze the data and to help design appropriate data collection strategies. These developments are illustrated with ASAXS experiments on different model systems performed at the P12 beamline. Published by Wiley-Blackwell.
- Published
- 2021
40. Tensor eigenvectors for projection pursuit
- Author
-
Loperfido, Nicola
- Published
- 2024
- Full Text
- View/download PDF
41. ENERGY-SAVING AND EMISSION REDUCTION SYSTEM OF DATA CENTER HEAT PIPE BASED ON LATENT HEAT OF WATER EVAPORATION.
- Author
-
Jinhui ZHAO, Panle WANG, Jingshun LI, Tianwei GU, and Jiaxu LU, D.
- Subjects
HEAT pipes ,HEATS of vaporization ,SERVER farms (Computer network management) ,GREENHOUSE gas mitigation ,DATA reduction ,LATENT heat ,ENERGY consumption ,ELECTRIC power consumption - Abstract
To address the current problem of high energy consumption in data centers, this paper proposes a data center heat pipe air-conditioning system based on the latent heat of water evaporation, which uses the latent heat of water evaporation for cooling by creating a low-pressure environment to evaporate large amounts of water. In order to verify the effect of the system, a heat pipe test bench based on the latent heat of water evaporation was designed and built. Compared with the use of traditional heat pipes for heat dissipation in the data center, the performance and economy of the water evaporation latent heat pipe system designed in this paper are analyzed experimentally. A multi-physics coupled model of water evaporation latent heat pipe air-conditioning based on COMSOL Multiphysics was established to simulate and study the temperature field and velocity field distribution of the water evaporation latent heat pipe air-conditioning system in data centers. The research shows that: -- Under the designed test conditions, compared with the traditional heat pipe system, the water evaporation latent heat pipe air conditioner can conduct 2540 kJ more heat in one day in an outdoor environment of 24 °C. -- At an ambient temperature of 35 °C and an indoor temperature of 25.8 °C, the cooling capacity of the heat pipe in the data center water evaporation latent heat pipe air-conditioning system is twice the cooling capacity of the air conditioner, and the heat pipe can work efficiently regardless of the outdoor ambient temperature. -- The energy-saving effect of the latent heat pipe of water evaporation in the data center is significant for air conditioners with an energy efficiency rating (EER) in the 2.5-4.4 range. It can improve the energy efficiency from level 5, with an EER of 2.5, to level 2, with an EER of 3.22, greatly reducing the power consumption of the data center air-conditioning system.
When the EER of the air conditioner exceeds 4.4, the coefficient of performance of the data center water evaporation latent heat pipe air-conditioning system will be lower than that of the air conditioner itself. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Efficient Lossy Compression for IoT Using SZ and Reconstruction with 1D U-Net.
- Author
-
Azar, Joseph, Tayeh, Gaby Bou, Makhoul, Abdallah, and Couturier, Raphaël
- Subjects
INTERNET of things ,DATA recovery ,DATA compression ,DATA reduction ,DATABASES ,IMAGE compression - Abstract
Much recent research has centered on maximizing the lifetime of Internet of Things (IoT) devices by deploying data reduction techniques on IoT nodes to reduce data transmission. Data compression methods can be seen as a direct way of achieving energy efficiency. The trade-off between compression ratio and data distortion is usually considered when using a lossy compressor. This paper proposes a lightweight SZ compressor with a maximal compression ratio that sets this trade-off aside. The proposed approach was tested on ESP Wroom 32 and WiFi LoRa 32 microcontrollers. Because the quality of the data arriving at the gateway matters for analysis, a lossy compressor with a high compression ratio can discard important data features and patterns. This paper addresses this problem by proposing a data enhancement method based on the U-Net architecture. The contribution of this paper is therefore twofold: (1) an efficient data reduction approach for energy optimization at the level of the IoT nodes, and (2) a 1D U-Net-based data recovery approach at the level of the edge. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
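SZ-style error-bounded compression rests on predicting each sample and quantizing the prediction residual within a user-set error bound. A deliberately stripped-down, hypothetical sketch of that core idea follows (the real SZ adds higher-order and multi-dimensional predictors plus entropy coding of the quantization codes):

```python
import numpy as np

def sz_like_compress(data, eb):
    """Error-bounded compression sketch: predict each sample from the previous
    *reconstructed* one and store the quantized residual, so encoder and decoder
    stay in sync and the pointwise error never exceeds eb."""
    codes = np.empty(len(data), dtype=np.int64)
    prev = 0.0
    for i, v in enumerate(data):
        codes[i] = round((v - prev) / (2 * eb))  # quantization bin of the residual
        prev = prev + codes[i] * 2 * eb          # decoder-visible reconstruction
    return codes

def sz_like_decompress(codes, eb):
    return np.cumsum(codes * 2 * eb)

data = np.sin(np.linspace(0, 6, 500)) * 10
eb = 0.1
rec = sz_like_decompress(sz_like_compress(data, eb), eb)
```

For smooth sensor signals the codes cluster around zero, which is exactly what makes the subsequent entropy coding stage in a real SZ pipeline effective.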
43. Optimized reconstruction of the crystallographic orientation density function based on a reduced set of orientations
- Author
-
Alexander Hartmaier, Ralf Hielscher, Abhishek Biswas, and Napat Vajragupta
- Subjects
crystal plasticity ,010302 applied physics ,Materials science ,Orientation (computer vision) ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Research Papers ,01 natural sciences ,General Biochemistry, Genetics and Molecular Biology ,Finite element method ,micromechanical modeling ,Crystallography ,Error function ,texture reconstruction ,Data point ,0103 physical sciences ,Texture (crystalline) ,ddc:620 ,integer approximation ,0210 nano-technology ,Anisotropy ,Data reduction ,Electron backscatter diffraction - Abstract
In this work, a new method is developed to reconstruct the orientation distribution function from experimental data with a set of equally weighted orientations. It is based on the deterministic integer approximation method, but it minimizes the shortcomings of this method by optimizing the orientation grid and kernel function. Crystallographic textures, as they develop for example during cold forming, can have a significant influence on the mechanical properties of metals, such as plastic anisotropy. Textures are typically characterized by a non-uniform distribution of crystallographic orientations that can be measured by diffraction experiments like electron backscatter diffraction (EBSD). Such experimental data usually contain a large number of data points, which must be significantly reduced to be used for numerical modeling. However, the challenge in such data reduction is to preserve the important characteristics of the experimental data, while reducing the volume and preserving the computational efficiency of the numerical model. For example, in micromechanical modeling, representative volume elements (RVEs) of the real microstructure are generated and the mechanical properties of these RVEs are studied by the crystal plasticity finite element method. In this work, a new method is developed for extracting a reduced set of orientations from EBSD data containing a large number of orientations. This approach is based on the established integer approximation method and it minimizes its shortcomings. Furthermore, the L1 norm is applied as an error function; this is commonly used in texture analysis for quantitative assessment of the degree of approximation and can be used to control the convergence behavior. The method is tested on four experimental data sets to demonstrate its capabilities. This new method for the purposeful reduction of a set of orientations into equally weighted orientations is not only suitable for numerical simulation but also shows improvement in results in comparison with other available methods.
- Published
- 2020
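The integer approximation step, replacing continuous orientation weights by counts of equally weighted copies and measuring the L1 approximation error, can be sketched as follows. The helper name and the largest-remainder tie-breaking are illustrative assumptions, and the weights are assumed normalized to sum to 1:

```python
import numpy as np

def equal_weight_reduction(weights, n_out):
    """Integer approximation: turn continuous orientation weights into counts of
    equally weighted copies summing to n_out, then report the L1 error."""
    counts = np.floor(weights * n_out).astype(int)
    # distribute the remaining copies to the largest rounding remainders
    remainder = weights * n_out - counts
    for i in np.argsort(remainder)[::-1][: n_out - counts.sum()]:
        counts[i] += 1
    l1_error = np.abs(counts / n_out - weights).sum()
    return counts, l1_error
```

The paper goes further by also optimizing the orientation grid and kernel so that this unavoidable rounding error is minimized, with the L1 norm playing the role shown here.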
44. Neural network in sports cluster analysis.
- Author
-
Zhang, Yanhua, Hou, Xuehua, and Xu, Shan
- Subjects
CLUSTER analysis (Statistics) ,DIMENSIONAL reduction algorithms ,VIDEO coding ,DATA reduction ,SPORTS ,FUZZY algorithms - Abstract
In the era of rapid development of the Internet, various types of data in daily life are becoming more and more important, including people's sports data. How to collect and store these massive exercise data, and how to extract users' exercise habits from them, are of great significance for improving people's enthusiasm for exercise. This article studies the application of neural networks in the cluster analysis of sports data. This paper proposes a combination of a neural network-based sports data analysis model and a density peak clustering algorithm for unsupervised dimensionality reduction of high-dimensional data. Both the encoder and the decoder are designed as three-layer fully connected neural networks. The encoder extracts the characteristics of the sample data, and the decoder approximates the original input sample. The encoder reduces the high-dimensional data to an intermediate dimension; combined with the density peak clustering algorithm, the dimensionality is reduced further, and the low-dimensional data are then analyzed. With a learning rate set for each data set, the three data sets are iterated 40 times. The prediction accuracy rates of the algorithm on the three data sets are 94.1%, 90.3%, and 89.6%, respectively; compared with the traditional PCA dimensionality reduction method, the proposed method extracts data features more effectively, improves the dispersion between clusters, and has a better clustering effect. Finally, a numerical example illustrates the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. A novel ensemble deep reinforcement learning model for short‐term load forecasting based on Q‐learning dynamic model selection.
- Author
-
He, Xin, Zhao, Wenlu, Zhang, Licheng, Zhang, Qiushi, and Li, Xinyu
- Subjects
REINFORCEMENT learning ,LOAD forecasting (Electric power systems) ,RECURRENT neural networks ,ARTIFICIAL intelligence ,DATA reduction - Abstract
Short-term load forecasting is critical for power system planning and operations, and ensemble forecasting methods for electricity loads have been shown to be effective in obtaining accurate forecasts. However, the weights in ensemble prediction models are usually preset based on overall performance after training, which prevents the model from adapting to different scenarios and limits the improvement of prediction performance. To further improve the accuracy and validity of the ensemble prediction method, this paper proposes an ensemble deep reinforcement learning approach using Q-learning dynamic weight assignment to account for local behaviours caused by changes in the external environment. Firstly, variational mode decomposition is used to reduce the non-stationarity of the original data by decomposing the load sequence. Then, a recurrent neural network (RNN), long short-term memory (LSTM) network, and gated recurrent unit (GRU) are selected as the base power load predictors. Finally, the predictions of the three sub-predictors are combined using the optimal weights generated by the Q-learning algorithm to obtain the final results. The results show that the forecasting capability of the proposed method outperforms all sub-models and several baseline ensemble forecasting methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
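The idea of Q-learning dynamic model selection can be illustrated with a tabular toy example. Everything below is an assumption for illustration: the two-state "regime" description, the bandit-style one-step update, and the three noisy series standing in for the RNN/LSTM/GRU sub-predictors; the paper's actual state, action, and reward designs are richer:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
truth = np.sin(np.linspace(0, 20, T))    # toy load curve

# Stand-ins for the RNN/LSTM/GRU sub-predictors: noisy copies of the
# truth, each more accurate in a different regime (rising vs. falling).
rising = np.gradient(truth) > 0
preds = np.stack([
    truth + rng.normal(0, np.where(rising, 0.05, 0.5), T),   # good when rising
    truth + rng.normal(0, np.where(rising, 0.5, 0.05), T),   # good when falling
    truth + rng.normal(0, 0.3, T),                           # mediocre always
])

# Tabular Q-learning: state = regime (0 rising / 1 falling),
# action = index of the sub-predictor to trust, reward = -|error|.
Q = np.zeros((2, 3))
alpha, eps = 0.1, 0.1
for t in range(T):
    s = 0 if rising[t] else 1
    a = rng.integers(3) if rng.random() < eps else int(Q[s].argmax())
    r = -abs(preds[a, t] - truth[t])
    Q[s, a] += alpha * (r - Q[s, a])     # one-step, bandit-style update
```

After training, `Q[s].argmax()` selects the sub-predictor that is locally most accurate in each regime, which is the dynamic-selection behaviour the fixed-weight ensembles in the abstract cannot express.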
46. ROME/REA: Three-year, Tri-color Timeseries Photometry of the Galactic Bulge.
- Author
-
Street, R. A., Bachelet, E., Tsapras, Y., Hundertmark, M. P. G., Bozza, V., Bramich, D. M., Cassan, A., Dominik, M., Figuera Jaimes, R., Horne, K., Mao, S., Saha, A., Wambsganss, J., and Zang, Weicheng
- Subjects
- *
PHOTOMETRY , *GALACTIC bulges , *VERY large array telescopes , *DATA release , *IMAGE analysis , *DATA reduction - Abstract
The Robotic Observations of Microlensing Events/Reactive Event Assessment (ROME/REA) Survey was a Key Project at Las Cumbres Observatory (LCO) which continuously monitored 20 selected fields (3.76 sq. deg) in the Galactic Bulge throughout their seasonal visibility window over a three-year period, between 2017 March and 2020 March. Observations were made in three optical passbands (SDSS g′, r′, i′), and LCO's multi-site telescope network enabled the survey to achieve a typical cadence of ∼10 hr in i′ and ∼15 hr in g′ and r′. In addition, intervals of higher-cadence (<1 hr) data were obtained during monitoring of key microlensing events within the fields. This paper describes the Difference Image Analysis data reduction pipeline developed to process these data, and the process for combining the photometry from LCO's three observing sites in the Southern Hemisphere. The full timeseries photometry for all ∼8 million stars, down to a limiting magnitude of i ∼ 18 mag, is provided in the data release accompanying this paper, and samples of the data are presented for exemplar microlensing events, illustrating how the tri-band data are used to derive constraints on the microlensing source star parameters, a necessary step in determining the physical properties of the lensing object. The timeseries data also enable a wealth of additional science, for example in characterizing long-timescale stellar variability, and a few examples of the data for known variables are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
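Constraining microlensing source parameters begins with fitting the standard point-source point-lens (Paczynski) magnification curve to the photometry. A minimal sketch of that curve, with illustrative parameter values rather than any event from the survey:

```python
import numpy as np

def pspl_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification A(t).

    u(t) is the lens-source separation in Einstein radii; the
    magnification follows the standard Paczynski formula
    A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    """
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    return (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))

# Illustrative event: peak at t0 = 0, Einstein time 20 days,
# impact parameter 0.1 Einstein radii.
t = np.linspace(-40, 40, 801)            # days relative to peak
A = pspl_magnification(t, t0=0.0, tE=20.0, u0=0.1)
```

Because the magnification is achromatic while blended flux generally is not, fitting the same A(t) simultaneously to the g′, r′, and i′ light curves separates source flux from blend flux per band, which is what yields the source-colour constraints mentioned in the abstract.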
47. Development of Basic Knowledge Construction Technique to Reduce The Volume of High-Dimensional Big Data.
- Author
-
Karya, Gede, Sitohang, Benhard, Akbar, Saiful, and Moertini, Veronica S.
- Subjects
INFORMATION & communication technologies ,EXTRACTION techniques ,DATA warehousing ,DATA reduction ,HIGH technology ,BIG data - Abstract
Big data has the characteristics of high volume, velocity, and variety (3V) and continues to grow exponentially following the development of the world's use of information and communication technology. The main problem with big data is the data deluge: the need for technology and methods to store and process data at this exponential growth rate is potentially limitless, giving rise to exponentially increasing technology requirements as well. The weakness of previous big data analysis approaches (batch and online real-time processing) is that they require high-end technology (large storage, memory, and processing power). This paper proposes a new approach to big data analysis that separates the construction of basic knowledge (BK) from the original data, at a much smaller volume (volume reduction), from its subsequent analysis into final knowledge, thus requiring smaller and simpler analysis technology. The proposals include formulating the definition and representation of BK, developing methods for constructing BK from source data, and analyzing BK into final knowledge. We propose a BK construction method based on a knowledge extraction technique that uses the BIRCH clustering algorithm for instance reduction and handles high-dimensional problems by parallelizing the dimension calculations used to compute distances between instances. We use the Adjusted Rand Index (ARI) to measure the similarity between the final knowledge of the baseline and proposed methods. First, modifying the BIRCH baseline by parallelizing the calculations increased speed by 17% to 25%. Next, splitting the parallel BIRCH (PBIRCH) baseline into BK construction and BK analysis reduced volume by 96% or more and increased speed by 43.50%, with similar final-knowledge results (ARI = 1).
Based on these results, we conclude that the BK construction method and the analysis from BK into final knowledge for high-dimensional big data significantly reduce volume and speed up the analytical process without reducing the quality of the final knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
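A two-stage sketch of the BK idea, using scikit-learn's Birch as the instance-reduction step: BIRCH compresses the instances into subcluster centroids (the volume-reduced "basic knowledge"), and the final clustering is computed on those centroids alone. The threshold, toy blob data, and the k-means analysis step are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import Birch, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# 10,000 toy instances in three well-separated blobs.
X, y = make_blobs(n_samples=10_000, centers=[[0, 0], [6, 0], [0, 6]],
                  cluster_std=0.5, random_state=0)

# Stage 1 (BK construction): BIRCH compresses the instances into a much
# smaller set of subcluster centroids -- the volume-reduced representation.
brc = Birch(threshold=1.0, n_clusters=None).fit(X)
bk = brc.subcluster_centers_

# Stage 2 (analysis on BK only): cluster the centroids, then map each
# original instance to the label of its nearest centroid.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(bk)
labels = km.labels_[brc.predict(X)]

ari = adjusted_rand_score(y, labels)     # similarity to the ground truth
reduction = 1 - len(bk) / len(X)         # fraction of instances removed
```

On this toy data the analysis over the reduced BK recovers the same partition as clustering the full data would, while the volume handled in stage 2 shrinks by well over 90%, mirroring the trade-off the abstract reports.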
48. Acoustic logging array signal denoising using U-net and a case study in a TangGu oil field.
- Author
-
Fu, Xin, Gou, Yang, and Wei, Fuqiang
- Subjects
SIGNAL denoising ,CONVOLUTIONAL neural networks ,ARTIFICIAL neural networks ,NOISE control ,OIL fields ,ACOUSTIC emission testing ,DATA reduction - Abstract
This study developed a noise-reduction method for acoustic logging array signals using a deep neural network algorithm in the time-frequency domain. Initially, we derived analytical solutions for the received waveforms when the acoustic logging tool was positioned either at the centre or eccentrically within the borehole. To simulate the received waveforms across various formations, we developed a real-axis integration algorithm. Subsequently, we devised a noise-reduction algorithm workflow based on a convolutional neural network and configured the structure and parameters of the U-net using TensorFlow. To address the scarcity of open datasets, we established both signal and noise datasets. The signal dataset was generated using theoretical simulation encompassing various model parameters, while the noise dataset was collected during tool testing and downhole operations. The trained model demonstrated substantial noise-reduction capabilities during validation. To validate the effectiveness of the algorithm, we applied noise reduction to actual data collected during downhole operations in a TangGu oil field, yielding impressive results across different types of noisy data. Therefore, the U-net-based time-frequency-domain noise-reduction algorithm proposed in this paper holds the potential to significantly improve the quality of acoustic logging array signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
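To make the time-frequency denoising idea concrete without a trained network, the sketch below applies classical spectral soft-thresholding as a stand-in for the mask a U-net would learn: transform a noisy toy trace with the STFT, shrink the magnitudes, and invert. The synthetic arrival, noise level, and threshold rule are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 10_000.0
t = np.arange(0, 0.2, 1 / fs)
clean = np.sin(2 * np.pi * 800 * t) * np.exp(-20 * t)   # toy decaying arrival
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0, 0.3, t.size)

# Time-frequency transform of the noisy trace.
f, tt, Z = stft(noisy, fs=fs, nperseg=256)

# Spectral soft-thresholding: a classical stand-in for the mask a
# trained U-net would predict in the time-frequency domain.
mag, phase = np.abs(Z), np.angle(Z)
thr = 3 * np.median(mag)                 # crude noise-floor estimate
mag_d = np.maximum(mag - thr, 0.0)
_, denoised = istft(mag_d * np.exp(1j * phase), fs=fs, nperseg=256)
denoised = denoised[: t.size]

snr_in = 10 * np.log10(np.sum(clean ** 2) / np.sum((noisy - clean) ** 2))
snr_out = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
```

A learned mask replaces the fixed threshold with a data-driven, frequency- and time-dependent one, which is what lets the U-net preserve weak formation arrivals that a global threshold would clip.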
49. Efficiently approaching vertical federated learning by combining data reduction and conditional computation techniques.
- Author
-
Folino, Francesco, Folino, Gianluigi, Pisani, Francesco Sergio, Pontieri, Luigi, and Sabatino, Pietro
- Subjects
FEDERATED learning ,DATA reduction ,DEEP learning ,INTERNET security - Abstract
In this paper, a framework based on a sparse Mixture of Experts (MoE) architecture is proposed for the federated learning and application of a distributed classification model in domains (like cybersecurity and healthcare) where different parties of the federation store different subsets of features for a number of data instances. The framework is designed to limit the risk of information leakage and computation/communication costs in both model training (through data sampling) and application (leveraging the conditional-computation abilities of sparse MoEs). Experiments on real data have shown the proposed approach to ensure a better balance between efficiency and model accuracy, compared to other vertical federated learning (VFL) solutions. Notably, in a real-life cybersecurity case study focused on malware classification (the KronoDroid dataset), the proposed method surpasses competitors even though it utilizes only 50% and 75% of the training set, whereas the competing approaches use it in full. It reduces the rate of false positives by 16.9% and 18.2%, respectively, and also delivers satisfactory results on the other evaluation metrics. These results showcase our framework's potential to significantly enhance cybersecurity threat detection and prevention in a collaborative yet secure manner. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
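The conditional-computation property of a sparse MoE comes from the gate selecting only the top-k experts per input, so the remaining experts are never evaluated. A minimal forward-pass sketch with random stand-in weights (the dimensions, gating rule, and parameters are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts, k = 8, 4, 6, 2

# Random stand-ins for trained expert and gate parameters.
W_experts = rng.normal(0, 0.1, (n_experts, d_in, d_out))
W_gate = rng.normal(0, 0.1, (d_in, n_experts))

def moe_forward(x):
    """Sparse MoE forward pass: only the top-k experts are evaluated."""
    logits = x @ W_gate
    topk = np.argsort(logits)[-k:]               # indices of selected experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                                 # softmax over selected gates
    # Conditional computation: the other n_experts - k experts are skipped,
    # saving their matrix multiplications entirely.
    return sum(wi * (x @ W_experts[i]) for wi, i in zip(w, topk)), topk

x = rng.normal(size=d_in)
y, used = moe_forward(x)
```

In a VFL setting each expert can be co-located with the party holding its feature subset, so skipping an expert also skips the corresponding communication round, which is the cost the framework exploits.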
50. The Research on Deep Learning-Driven Dimensionality Reduction and Strain Prediction Techniques Based on Flight Parameter Data.
- Author
-
Huang, Wenbo, Wang, Rui, Zhang, Mengchuang, and Yin, Zhiping
- Subjects
DEEP learning ,STRUCTURAL health monitoring ,DATA reduction ,FORECASTING - Abstract
Loads and strains in critical areas play a crucial role in aircraft structural health monitoring, the tracking of individual aircraft lifespans, and the compilation of load spectra. Direct measurement of actual flight loads presents challenges, so loads at critical locations are typically obtained by using load-strain stiffness matrices, derived from ground calibration tests, to map measured flight parameters to loads. Rapidly developing deep learning neural network methods now offer new perspectives for this task. This paper explores the potential of deep learning models for predicting loads and strains from flight parameters, integrating flight-parameter preprocessing techniques, flight maneuver recognition (FMR), virtual ground calibration tests for wings, dimensionality reduction of flight data through Autoencoder (AE) network models, and Long Short-Term Memory (LSTM) network models for strain prediction. These efforts contribute to the prediction of strains in critical areas based on flight parameters, thereby enabling real-time assessment of aircraft damage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
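A minimal sketch of the prediction side of the AE-then-LSTM pipeline: an LSTM forward pass over a sequence of (here randomly generated) encoded flight parameters, with a linear read-out mapping the last hidden state to a single strain value. All weights and dimensions are random stand-ins for illustration, not trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_forward(X, Wx, Wh, b):
    """Single-layer LSTM forward pass over a (T, d_in) sequence.

    The weight matrices stack the input/forget/cell/output gates,
    following the standard LSTM formulation.
    """
    T, _ = X.shape
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    outputs = np.empty((T, H))
    for t in range(T):
        z = X[t] @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # update cell state
        h = o * np.tanh(c)               # update hidden state
        outputs[t] = h
    return outputs

# Toy setup: 3 AE-encoded flight parameters -> 8 hidden units; a linear
# read-out maps the final hidden state to one strain value.
d_in, H, T = 3, 8, 50
Wx = rng.normal(0, 0.2, (d_in, 4 * H))
Wh = rng.normal(0, 0.2, (H, 4 * H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.2, H)

encoded = rng.normal(size=(T, d_in))     # stand-in for AE encoder output
strain_pred = lstm_forward(encoded, Wx, Wh, b)[-1] @ W_out
```

In the pipeline the abstract describes, `encoded` would come from the trained Autoencoder's bottleneck rather than random noise, and the LSTM and read-out weights would be fit against strains from the virtual ground calibration tests.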