1,645 results for '"A-weighting"'
Search Results
2. A Weighting Method for Feature Dimension by Semisupervised Learning With Entropy
- Author
-
Murong Yang, Shihui Ying, Ziyan Qin, Jigen Peng, and Dequan Jin
- Subjects
Computer Networks and Communications ,business.industry ,Dimensionality reduction ,Pattern recognition ,A-weighting ,Class (biology) ,Computer Science Applications ,Weighting ,Feature Dimension ,Artificial Intelligence ,Feature (computer vision) ,Metric (mathematics) ,Artificial intelligence ,Entropy (energy dispersal) ,business ,Software ,Mathematics - Abstract
In this article, a semisupervised weighting method for feature dimensions based on entropy is proposed for classification, dimension reduction, and correlation analysis. For real-world data, different feature dimensions usually show different importance. Generally, data in the same class are supposed to be similar, so their entropy should be small, while data in different classes are supposed to be dissimilar, so their entropy should be large. Accordingly, we propose a way to construct the weights of feature dimensions from the whole entropy and the inner-class entropies. The weights indicate the contribution of their corresponding feature dimensions to classification. They can be used to improve classification performance by defining a weighted distance metric, and can be applied to dimension reduction and correlation analysis as well. Numerical experiments compare the proposed method with other representative methods and demonstrate that it is feasible and efficient in classification, dimension reduction, and correlation analysis.
- Published
- 2023
- Full Text
- View/download PDF
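The abstract above does not give the exact weighting formula, so the following is only a rough sketch of the general idea, assuming (my reading, not the paper's) that a feature's weight grows with the gap between its overall entropy and its mean inner-class entropy, and that the weights then enter a weighted distance metric:

```python
import numpy as np

def entropy(values, bins=10):
    """Shannon entropy of a 1-D sample, via a histogram estimate."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def feature_weights(X, y, bins=10):
    """Weight each feature by (overall entropy - mean inner-class entropy),
    normalized to sum to 1.  Purely illustrative, not the paper's formula."""
    n_features = X.shape[1]
    raw = np.zeros(n_features)
    for j in range(n_features):
        whole = entropy(X[:, j], bins)
        inner = np.mean([entropy(X[y == c, j], bins) for c in np.unique(y)])
        raw[j] = max(whole - inner, 0.0)   # large gap -> discriminative feature
    return raw / raw.sum() if raw.sum() > 0 else np.full(n_features, 1.0 / n_features)

def weighted_distance(a, b, w):
    """Weighted Euclidean distance that could be used for classification."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

# toy usage with two well-separated classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal([3, 0, 0], 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
w = feature_weights(X, y)
print(w, weighted_distance(X[0], X[60], w))
```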
3. Investigating impacts of various operational conditions on fuel consumption and stop penalty at signalized intersections
- Author
-
Suhaib Alshayeb, Justin R. Effinger, and Aleksandar Stevanovic
- Subjects
Computer science ,Traffic simulation ,Transportation ,K factor ,A-weighting ,Management, Monitoring, Policy and Law ,Signal timing ,Reduction (complexity) ,Control theory ,Automotive Engineering ,Trajectory ,Fuel efficiency ,Intersection (aeronautics) ,Civil and Structural Engineering - Abstract
When optimizing signals, traffic agencies adopt policies to improve either mobility performance measures (e.g., delay and stops), environmental aspects, or the safety of signalized intersections. One such policy, commonly implemented through a so-called Performance Index (PI), holds that excess fuel consumption (FC) should be reduced by minimizing the PI, a linear combination of delays and stops. The key factor in such a PI is the stop penalty "K", a weighting factor, or stop equivalency, measured in seconds of delay. In contemporary signal optimization practice, K is given a constant value (e.g., 10 seconds) and is not recognized as a parameter that depends on operational conditions. This study challenges that common view and presents a methodology to derive the K factor and investigate the impacts of various operational conditions (e.g., cruising speed) on its value. The study uses a traffic simulation model coupled with a modal fuel consumption and emission model to investigate second-by-second FC during stopping events at a signalized intersection. The experiments are performed on a hypothetical, yet realistic, intersection under several operational scenarios. The findings show that K varies significantly with all of the investigated operational conditions. More importantly, the results indicate that the K factor should be much larger than the values used in current signal timing practice. The implications of these findings may lead to significant changes in current policies for signal timing optimization. Future research should validate these findings with second-by-second vehicle trajectory and FC data from the field.
- Published
- 2022
- Full Text
- View/download PDF
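For readers unfamiliar with the performance index mentioned in the entry above, here is a minimal illustration of how the stop penalty K trades stops against delay when ranking timing plans; the plans and numbers are invented, not taken from the study:

```python
def performance_index(total_delay_s, total_stops, k=10.0):
    """PI = delay + K * stops, with K (the stop penalty) in seconds of delay per stop."""
    return total_delay_s + k * total_stops

# Two hypothetical timing plans: plan A has less delay but more stops.
plan_a = dict(total_delay_s=1500.0, total_stops=60)
plan_b = dict(total_delay_s=2200.0, total_stops=20)

for k in (10.0, 40.0):   # a small, conventional K versus a larger, FC-oriented K
    best = min(("A", performance_index(**plan_a, k=k)),
               ("B", performance_index(**plan_b, k=k)), key=lambda t: t[1])
    print(f"K={k}: best plan is {best[0]} (PI={best[1]:.0f})")
# With K=10 plan A wins; with K=40 the ranking flips to plan B.
```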
4. A Novel Technique for Robust Training of Deep Networks With Multisource Weak Labeled Remote Sensing Data
- Author
-
Lorenzo Bruzzone and Gianmarco Perantoni
- Subjects
Complex data type ,Digital mapping ,Generalization ,Computer science ,business.industry ,Reliability (computer networking) ,Deep learning ,Process (computing) ,A-weighting ,Robustness (computer science) ,General Earth and Planetary Sciences ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Remote sensing - Abstract
Deep learning has gained broad interest in remote sensing image scene classification thanks to the effectiveness of deep neural networks in extracting semantics from complex data. However, deep networks require large amounts of training samples to obtain good generalization capabilities and are sensitive to errors in the training labels. This is a problem in remote sensing, since highly reliable labels can be obtained only at high cost and in limited amounts. However, many sources of less reliable labeled data are available, e.g., obsolete digital maps. In order to train deep networks with larger datasets, we propose both the combination of single or multiple weak sources of labeled data with a small but reliable dataset to generate multisource labeled datasets, and a novel training strategy in which the reliability of each source is taken into consideration. This is done by exploiting the transition matrices describing the statistics of the errors of each source. The transition matrices are embedded into the labels and used during the training process to weigh each label according to the related source. The proposed method acts as a weighting scheme at the gradient level, where each instance contributes with different weights to the optimization of different classes. The effectiveness of the proposed method is validated by experiments on different datasets. The results prove the robustness of the proposed method and its capability of leveraging unreliable sources of labels.
- Published
- 2022
- Full Text
- View/download PDF
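The entry above embeds per-source transition matrices into the labels; a closely related and widely used mechanism is forward correction of the loss with a per-source transition matrix, sketched below. This is offered only as an illustration of how such matrices can weigh weak labels, not as the authors' exact formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def noisy_label_loss(logits, noisy_labels, source_ids, transition_mats):
    """Cross-entropy against weak labels, corrected by each source's transition
    matrix T[s] (T[s][i, j] = P(observed label j | true label i)).
    A generic forward-correction sketch, not necessarily the paper's scheme."""
    p_true = softmax(logits)                       # model's class posteriors
    losses = []
    for p, y, s in zip(p_true, noisy_labels, source_ids):
        p_noisy = transition_mats[s].T @ p         # predicted noisy-label distribution
        losses.append(-np.log(p_noisy[y] + 1e-12))
    return np.mean(losses)

# toy usage: a reliable source (near-identity T) and a weak source (noisier T)
T_reliable = np.array([[0.95, 0.05], [0.05, 0.95]])
T_weak     = np.array([[0.70, 0.30], [0.40, 0.60]])
logits = np.array([[2.0, -1.0], [0.5, 0.2], [-1.0, 1.5]])
print(noisy_label_loss(logits, noisy_labels=[0, 1, 1],
                       source_ids=[0, 1, 1],
                       transition_mats=[T_reliable, T_weak]))
```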
5. Modelling speaker-size discrimination with voiced and unvoiced speech sounds based on the effect of spectral lift
- Author
-
Roy D. Patterson, Kodai Yamamoto, Hideki Kawahara, Toshie Matsui, Ryo Uemura, and Toshio Irino
- Subjects
Linguistics and Language ,Systematic difference ,Lift (data mining) ,Just-noticeable difference ,Communication ,Speech recognition ,Speech sounds ,Wavelet transform ,A-weighting ,Language and Linguistics ,Computer Science Applications ,Weighting ,Modeling and Simulation ,Computer Vision and Pattern Recognition ,Size Perception ,Software ,Mathematics - Abstract
We can estimate the size of a speaker solely from their speech sounds, regardless of whether the sounds are voiced or unvoiced. In this study, we developed a size perception model based on the computational theory of the stabilised wavelet transform (SWT) to explain a variety of size discrimination data. We also conducted extended experiments to evaluate the effect of spectral lift on speaker size discrimination for voiced and unvoiced speech sounds. The just noticeable difference (JND) and the point of subjective equality (PSE) for speaker size discrimination were compared between speech sounds with natural and lifted spectra. On average, listeners tended to judge that the lifted speech came from a smaller speaker. The PSE, which indicates the systematic difference in perceived size, shifted by approximately 10% (Exp. 1) for unvoiced speech sounds and by approximately 5% (Exp. 2) for voiced speech sounds. The JND depended on the spectral lift for unvoiced sounds, but not for voiced sounds. At the same time, there were large differences between listeners: some listeners' judgements were affected by the spectral lift, while others were not. We constructed a size discrimination model to explain all of the experimental results, including listener dependence, for voiced and unvoiced speech sounds. We introduced a weighting function, based on the Size-Shape Image (SSI) in the SWT, which reduces the influence of resolved harmonics caused by the glottal pulse sequence in voiced speech. As a result, the model with the SSI weighting function predicted the individual listeners' data fairly well, whether or not the judgements were affected by the spectral lift, and whether the speech sounds were voiced or unvoiced. The optimum choice of one parameter, the spectral compensation coefficient, enabled us to explain the data of all individuals.
- Published
- 2022
- Full Text
- View/download PDF
6. A Weighted Sample Framework to Incorporate External Calculators for Risk Modeling
- Author
-
Michael S. Sabel and Debashis Ghosh
- Subjects
Statistics and Probability ,Optimization problem ,Basis (linear algebra) ,Computer science ,business.industry ,Sample (statistics) ,A-weighting ,Machine learning ,computer.software_genre ,Biochemistry, Genetics and Molecular Biology (miscellaneous) ,Term (time) ,law.invention ,Calculator ,Order (exchange) ,law ,Convex optimization ,Artificial intelligence ,business ,computer - Abstract
Personalized risk prediction calculators abound in medicine, and they carry important information about the effect of prognostic factors on outcomes of interest. How to use that information to analyze local datasets is a pressing question, and several recent proposals have attempted to pool information from external calculators into local datasets using parameter-sharing approaches. Here, we adopt a weighting approach using convex optimization to transfer information. Rather than directly modeling parameters, we instead pool information on a per-sample basis. In particular, we develop prediction-guided analyses, along with an attendant inferential strategy, for incorporating information from the external risk calculator. We also supplement this analytical approach with an exploratory technique using trees to describe what we term 'calculator-guided observations.' In addition, the optimization problem itself can yield insights into the potential transferability of the external calculator to the local dataset. The methodology is illustrated by simulation studies as well as an application of risk calculators to the prediction of sentinel lymph node positivity in melanoma.
- Published
- 2021
- Full Text
- View/download PDF
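The entry above pools information on a per-sample basis via convex optimization; the details are not given in the abstract, so the toy sketch below shows only one plausible form of the idea: choose sample weights on the probability simplex so the weighted local data reproduce the external calculator's average prediction, with a ridge term keeping weights near uniform. The objective, function names, and data are my own illustration, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import minimize

def calculator_guided_weights(local_risk, calculator_risk_mean, ridge=1e-3):
    """Find sample weights w on the probability simplex so that the weighted mean
    of locally observed risks matches the external calculator's mean prediction,
    while staying close to uniform weights (ridge term).  Illustrative only."""
    n = len(local_risk)
    w0 = np.full(n, 1.0 / n)

    def objective(w):
        moment_gap = (w @ local_risk - calculator_risk_mean) ** 2
        return moment_gap + ridge * np.sum((w - w0) ** 2)

    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(objective, w0, bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(1)
local_risk = rng.uniform(0.05, 0.4, size=30)      # hypothetical local risk estimates
w = calculator_guided_weights(local_risk, calculator_risk_mean=0.30)
print(w.round(3), (w @ local_risk).round(3))
```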
7. Framework for fire risk assessment of bridges
- Author
-
Mustesin Ali Khan, Ghazanfar Ali Anwar, Asif Usmani, and Aatif Ali Khan
- Subjects
Computer science ,Vulnerability ,Analytic hierarchy process ,Building and Construction ,A-weighting ,Bridge (interpersonal) ,Fire risk ,Architecture ,Fire protection ,Forensic engineering ,Economic impact analysis ,Safety, Risk, Reliability and Quality ,Economic consequences ,Civil and Structural Engineering - Abstract
Bridge fires are a major concern because of their social and economic consequences when bridges have to be closed to traffic. The concern for life safety is less significant, as few fatalities have been reported in bridge fires, but such fires can result in huge economic and social consequences. Despite the frequency and consequences of bridge fires, they have been the subject of very few studies and are neglected in the various international bridge design standards. This paper presents a framework for evaluating the fire risk to bridges. Fire risk is estimated by considering various criteria, such as the social and economic impact of fire, the vulnerability of bridge structures to fire, and the likelihood of a bridge fire. In this framework, each criterion, sub-criterion, and alternative that can influence the fire risk of a bridge is assigned a weighting value depending on its importance. The analytic hierarchy process (AHP) is utilised to estimate the weightings for the different factors. The proposed framework is implemented and validated using previous fire accident data. Six bridge fire incidents are considered in this study, and the damage level they experienced is found to be consistent with the damage level associated with the fire risk estimated by the proposed framework. This framework presents an important methodology for highway departments and bridge engineers to estimate the fire risk for a particular bridge or an entire bridge network in a region. An accurate estimation of fire risk helps highway engineers to calculate the amount of fire protection required for bridge structures.
- Published
- 2021
- Full Text
- View/download PDF
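The entry above uses AHP to derive the criterion weightings. The standard AHP computation, taking the normalized principal eigenvector of a pairwise comparison matrix, is sketched below; the three criteria and the comparison values are invented for illustration, not taken from the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights = normalized principal eigenvector of the pairwise
    comparison matrix (standard AHP)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def consistency_ratio(pairwise):
    """CR = CI / RI, using Saaty's random index for n = 3."""
    n = pairwise.shape[0]
    lam = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58}[n]
    return ci / ri

# hypothetical comparisons: economic impact vs. vulnerability vs. fire likelihood
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A).round(3), round(consistency_ratio(A), 3))
```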
8. Using a dual-frame design to improve phone surveys on political attitudes: developing a weighting strategy for limited external information in Hong Kong
- Author
-
Po-san Wan, Victor Zheng, and Kevin Wong
- Subjects
Statistics and Probability ,Estimation ,Phone ,Computer science ,Mobile phone ,Frame (networking) ,Econometrics ,General Social Sciences ,Estimator ,A-weighting ,Landline ,Weighting - Abstract
In recent years, rapid increases in mobile phone ownership and decreases in landline users have led to potential biases in landline phone survey estimates. Mobile-only users have been found to be over-represented in many mobile phone surveys. A dual-frame survey, namely a combination of a landline and a mobile phone survey, is proposed to solve this problem. The design of such a survey requires a more complex weighting procedure and thus additional benchmark information on phone status and usage for weighting. There is no consensus on a standard weighting method, but there is general agreement that it should include: (1) a computation of base weights and (2) a post-stratification adjustment. Various weighting methods for a dual-frame phone survey of political attitudes were investigated in this study using empirical data from a study on political attitudes in Hong Kong. We found that the average estimator and the single-frame estimator methods are the best approaches for computing the base weight for a dual-frame survey, and that they provide similar estimates of political attitudes. No significant difference in estimates of political attitudes was found between using only gender and age for the post-stratification adjustment and including gender, age, education, and working status. Cell weighting and raking provided similar estimates.
- Published
- 2021
- Full Text
- View/download PDF
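As background for the base-weight step discussed in the entry above, a textbook composite base weight for a dual-frame phone survey scales the design weights of dual users (reachable in both frames) by a mixing factor so the overlap domain is not double counted. The sketch below uses placeholder weights and field names and is not the estimator finally chosen in the Hong Kong study.

```python
def dual_frame_base_weight(design_weight, frame, phone_status, lam=0.5):
    """Composite base weight for a dual-frame phone survey.
    Landline-only and mobile-only respondents keep their design weight;
    dual users are scaled by lambda (landline frame) or 1 - lambda
    (mobile frame) so the overlap domain is counted once in total."""
    if phone_status == "dual":
        return design_weight * (lam if frame == "landline" else 1.0 - lam)
    return design_weight

# toy respondents: (design weight, sampling frame, phone status)
sample = [(120.0, "landline", "landline_only"),
          (110.0, "landline", "dual"),
          (90.0,  "mobile",   "dual"),
          (150.0, "mobile",   "mobile_only")]
print([dual_frame_base_weight(*r) for r in sample])
```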
9. Spatially-Weighted Factor Analysis for Extraction of Source-Oriented Mineralization Feature in 3D Coordinates of Surface Geochemical Signal
- Author
-
Shahram Hosseini, Yannick Deville, Seyed Hassan Tabatabaei, Emmanuel John M. Carranza, and Saeid Esmaeiloghli
- Subjects
Permutation (music) ,Basis (linear algebra) ,Covariance matrix ,Feature (computer vision) ,A-weighting ,Function (mathematics) ,Biological system ,Signal ,Geology ,Eigenvalues and eigenvectors ,General Environmental Science - Abstract
This contribution proposes a spatially weighted factor analysis (SWFA) to effectively recognize the underlying mineralization-related feature(s) in geochemical signals. The 3D spatial properties of the sampled surficial earth materials provide the opportunity to orient the results toward the potential sources by defining proper localization functions. Perceiving hydrothermal alterations as mineralization-indicating vectors in geochemical systems, a weighting function integrates the distance to prospective alteration zones and the geometry of productive geo-objects into a single formulation to achieve source-oriented results. A covariance matrix tuned by the system localization function reformulates the standard factor analysis (FA) model to manifest source-oriented mineralization factor(s). The established mathematical setting was adapted to the compositional nature of multi-elemental signals and implemented via a combination of programming on the MATLAB platform and R packages. An experiment on a porphyry Cu deposit was subjected to the outlined procedure for performance appraisal and comparison with FA. The results indicated that the use of a weighting function configures the permutation of eigenvalues in such a way as to reflect spatial zoning from proximal to distal signals, while providing clearly interpretable eigenvectors for ore-forming elements. By amplifying the signal of interest and reducing the signal of irrelevant geo-processes, SWFA modulates the frequency distribution and spatial continuity of the feature of interest so that the continuous-value mineralization landscape becomes more consistent with the subsurface metallogenic reality in the survey area. A receiver operating characteristic analysis was adopted to quantitatively evaluate the factorized signal in predicting mineralized ground and to narrow down the target areas. The results revealed a significant spatial coincidence between the source-oriented metallogenic pattern and mineralization evidence, implying superiority over the model derived by standard FA. The suggested scheme holds potential as a more efficient basis for follow-up exploration.
- Published
- 2021
- Full Text
- View/download PDF
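A minimal numerical sketch of the core idea in the entry above, weighting the covariance matrix before factor extraction, is given below. It assumes, purely for illustration, that sample weights decay with distance to the nearest prospective alteration zone; the Gaussian kernel, its length scale, and the data are not the paper's localization function or case study.

```python
import numpy as np

def spatial_weights(sample_xyz, alteration_xyz, length_scale=500.0):
    """Weight each sample by proximity to the nearest alteration-zone point
    (Gaussian decay).  The kernel is an assumption for illustration only."""
    d = np.linalg.norm(sample_xyz[:, None, :] - alteration_xyz[None, :, :], axis=2)
    return np.exp(-(d.min(axis=1) / length_scale) ** 2)

def weighted_factor_loadings(X, w, n_factors=2):
    """Eigendecomposition of a weighted covariance matrix: samples near the
    prospective zones dominate the extracted factors."""
    Xc = X - np.average(X, axis=0, weights=w)
    C = (Xc * w[:, None]).T @ Xc / w.sum()
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_factors]
    return vals[order], vecs[:, order]

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                 # e.g., transformed element concentrations
xyz = rng.uniform(0, 2000, size=(200, 3))     # sample coordinates (m)
alt = rng.uniform(500, 1500, size=(10, 3))    # hypothetical alteration-zone points
w = spatial_weights(xyz, alt)
vals, loadings = weighted_factor_loadings(X, w)
print(vals.round(3))
```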
10. Modeling and prediction for diesel performance based on deep neural network combined with virtual sample
- Author
-
Li Bingqiang, Zheng Hainan, Zhenhuan Dou, Jinfeng Liu, Kang Chao, Zan Liu, Honggen Zhou, and Yu Chen
- Subjects
Multidisciplinary ,Artificial neural network ,Computer science ,Science ,Condition monitoring ,A-weighting ,Diesel engine ,Automotive engineering ,Article ,Mechanical engineering ,Diesel fuel ,Brake specific fuel consumption ,Noise ,Engineering ,Medicine ,Parametric statistics - Abstract
Performance models are a critical step in condition monitoring and fault diagnosis of diesel engines, and an important bridge describing the link between input parameters and targets. Large-scale experimental methods with high economic costs are often adopted to construct accurate performance models. To ensure the accuracy of the model while reducing the cost of testing, a novel method for modeling the performance of a marine diesel engine is proposed based on a deep neural network coupled with virtual sample generation technology. Firstly, according to practical experience, four parameters (speed, power, lubricating oil temperature, and lubricating oil pressure) are selected as the input factors for establishing the performance models. In addition, brake specific fuel consumption, vibration, and noise are adopted to assess the status of the marine diesel engine. Secondly, small-sample experiments on the diesel engine are performed under multiple working conditions. The experimental sample data are then diffused to obtain valid extended data based on virtual sample generation technology. The performance models are subsequently established using the deep neural network method, in which the diffused data set is adopted to reduce the cost of testing. Finally, the accuracy of the developed model is verified through experiment, and the parametric effects on performance are discussed. The results indicate that the overall prediction accuracy is more than 93%. Moreover, power is the key factor affecting brake specific fuel consumption, with a weighting of 30% among the four input factors, while speed is the key factor affecting vibration and noise, with weightings of 30% and 30.5%, respectively.
- Published
- 2021
11. A novel approach for solving stochastic problems with multiple objective functions
- Author
-
Fatima Bellahcene and Ramzi Kasri
- Subjects
Mathematical optimization ,Multivariate statistics ,Quadratic problem ,Computer science ,MathematicsofComputing_NUMERICALANALYSIS ,Regular polygon ,A-weighting ,Function (mathematics) ,Management Science and Operations Research ,Expected value ,Stochastic programming ,Computer Science Applications ,Theoretical Computer Science ,Multiple objective - Abstract
In this paper we suggest an approach for solving a multiobjective stochastic linear programming problem with normal multivariate distributions. Our approach is a combination of a multiobjective method and a nonconvex technique. The problem is first transformed into a deterministic multiobjective problem by introducing the expected value criterion and a utility function that represents the decision maker's preferences. The obtained problem is reduced to a mono-objective quadratic problem using a weighting method. This last problem is solved by DC (Difference of Convex functions) programming and the DC algorithm. A numerical example is included for illustration.
- Published
- 2021
- Full Text
- View/download PDF
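A toy illustration of only the weighting (scalarization) step mentioned in the entry above: two objectives, replaced by their expected values, are collapsed into one via weights and solved as an ordinary linear program. The utility-function construction, quadratic reformulation, and DC algorithm used in the paper are not reproduced here, and all coefficients are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Two linear objectives with random (normally distributed) coefficients are
# replaced by their expected values, then combined with weights w1, w2.
mean_c1 = np.array([3.0, 1.0])      # E[c1]
mean_c2 = np.array([1.0, 4.0])      # E[c2]
w = np.array([0.7, 0.3])            # decision maker's weights

c = w[0] * mean_c1 + w[1] * mean_c2           # weighted-sum objective (minimize)
A_ub = np.array([[-1.0, -1.0]])               # x1 + x2 >= 4
b_ub = np.array([-4.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)
```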
12. A Robust Predictive Torque and Flux Control for IPM Motor Drives Without a Cost Function
- Author
-
Sadegh Vaez-Zadeh and Mohammad A. Khalilzadeh
- Subjects
Computer science ,020208 electrical & electronic engineering ,02 engineering and technology ,A-weighting ,AC motor ,Weighting ,Robustness (computer science) ,Control theory ,Control system ,0202 electrical engineering, electronic engineering, information engineering ,Inverter ,Torque ,Electrical and Electronic Engineering ,Parametric statistics - Abstract
Finite control set model predictive torque and flux control of AC motor drives commonly selects optimal inverter switching states through a cost function with a required weighting factor. The weighting factor should be well tuned in accordance with the motor specifications and operating points. Additionally, the control performance of the drive is degraded under parametric uncertainties. In this article, a predictive torque and flux control method is proposed for interior permanent magnet synchronous motor drives without a cost function. As a result, the tuning of the weighting factor is avoided. In addition, an estimation scheme based on a data-driven model is adopted, which enhances the robustness of the drive against parametric uncertainties. Instead of directly using the motor parameters, the input and output data of the control system, i.e., voltage and current, are used to estimate three new coefficients employed in the predictions. The performance of an interior permanent magnet synchronous motor drive under the proposed control method is evaluated and compared with those under the conventional predictive torque control method and a recently developed predictive torque control method without a weighting factor. The results confirm the effectiveness of the proposed control method and its superiority over the other two methods.
- Published
- 2021
- Full Text
- View/download PDF
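For context on what the avoided weighting factor does in the entry above, below is a sketch of the conventional finite-control-set selection the article improves upon: each candidate switching state's predicted torque and flux errors are combined through a weighting factor. The candidate predictions are placeholders, not a motor model, and the code does not represent the paper's cost-function-free method.

```python
def fcs_mpc_select(candidates, torque_ref, flux_ref, lam=20.0):
    """Conventional FCS-MPC selection: pick the inverter switching state whose
    predicted torque/flux minimizes J = |T_ref - T| + lam * |psi_ref - psi|."""
    def cost(c):
        return abs(torque_ref - c["torque"]) + lam * abs(flux_ref - c["flux"])
    return min(candidates, key=cost)

# hypothetical one-step predictions for three of the eight switching states
candidates = [
    {"state": (1, 0, 0), "torque": 9.2,  "flux": 0.82},
    {"state": (1, 1, 0), "torque": 10.4, "flux": 0.88},
    {"state": (0, 1, 0), "torque": 8.1,  "flux": 0.79},
]
print(fcs_mpc_select(candidates, torque_ref=10.0, flux_ref=0.85)["state"])
```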
13. Optimizing the prototypes with a novel data weighting algorithm for enhancing the classification performance of fuzzy clustering
- Author
-
Nie Weike, Kaijie Xu, Witold Pedrycz, and Zhiwu Li
- Subjects
0209 industrial biotechnology ,Fuzzy clustering ,Logic ,Process (computing) ,02 engineering and technology ,Construct (python library) ,A-weighting ,Function (mathematics) ,computer.software_genre ,Weighting ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Unsupervised learning ,020201 artificial intelligence & image processing ,Data mining ,Cluster analysis ,computer ,Mathematics - Abstract
Fuzzy clustering is regarded as an unsupervised learning process that constitutes a prerequisite for many other data mining techniques. Deciding how to classify data efficiently and accurately has been one of the topics pursued by many researchers. We anticipate that the classification performance of clustering is strongly dependent on the boundary data (viz. data located at the boundaries of the clusters). The boundary data hold some level of uncertainty and as such contain more information than other data. Usually, the greater the uncertainty, the more information contained in such data. To improve the quality of clustering, this study develops an augmented scheme of fuzzy clustering, in which a novel weighted-data-based fuzzy clustering is proposed. In the introduced scheme, a dataset is composed of boundary data and non-boundary data. The partition matrix is used to determine the boundary data and the non-boundary data to be considered next in the clustering process. Then, we assign different weights to each datum to construct the weighted data. During this process, we make the weights for the boundary data and the non-boundary data different, which reduces the contribution of the boundary data to the prototypes and enhances that of the non-boundary data. Furthermore, we build a weighting function to determine the weights of the data. The weighted data are used to optimize the prototypes. With the optimized prototypes, the partition matrix can be refined, which ultimately optimizes the boundaries of the clusters. Finally, the classification performance of fuzzy clustering is enhanced. We offer a thorough analysis of the developed scheme. Comprehensive experimental studies involving synthetic and publicly available datasets are reported to demonstrate the performance of the proposed approach.
- Published
- 2021
- Full Text
- View/download PDF
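A minimal sketch of how per-datum weights can enter the fuzzy c-means prototype update described in the entry above, assuming (my simplification) that boundary data are flagged by a low maximum membership and given a smaller weight; the paper's actual weighting function and boundary detection rule are not reproduced.

```python
import numpy as np

def weighted_fcm(X, n_clusters=2, m=2.0, boundary_weight=0.3, n_iter=50, seed=0):
    """Fuzzy c-means in which each datum carries a weight w_i; data whose maximum
    membership is low (boundary data) get a smaller weight and so pull the
    prototypes less.  Illustrative reading of the abstract, not the paper's code."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), n_clusters, replace=False)]         # initial prototypes
    w = np.ones(len(X))
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                        # partition matrix
        w = np.where(U.max(axis=1) < 0.6, boundary_weight, 1.0)  # flag boundary data
        num = (w[:, None] * U ** m).T @ X
        den = (w[:, None] * U ** m).sum(axis=0)[:, None]
        V = num / den                                            # weighted prototype update
    return V, U, w

X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(3, 0.5, (50, 2))])
V, U, w = weighted_fcm(X)
print(V.round(2), int((w < 1).sum()), "boundary points")
```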
14. Robust control of uncertain robotic systems: An adaptive friction compensation approach
- Author
-
Zhisheng Duan, Qingyun Wang, Qishao Wang, and Han Zhuang
- Subjects
ComputingMethodologies_SIMULATIONANDMODELING ,Computer science ,Property (programming) ,General Engineering ,Process (computing) ,02 engineering and technology ,A-weighting ,010402 general chemistry ,021001 nanoscience & nanotechnology ,01 natural sciences ,Stability (probability) ,0104 chemical sciences ,Compensation (engineering) ,Weighting ,Control theory ,Trajectory ,General Materials Science ,InformationSystems_MISCELLANEOUS ,Robust control ,0210 nano-technology ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper solves the robust control problem of robotic manipulator systems with uncertain dynamics using a friction compensation approach. A weighting factor is introduced to distinguish the role of friction in the control process by comparing the directions of the sliding vector and the friction. Utilizing the weighting factor, model-based and model-free adaptive friction compensation controllers are designed to achieve asymptotic tracking of the desired joint-space trajectory according to the available knowledge of friction. The damping property of friction is fully exploited to improve control performance: friction that is harmful to stability is compensated, while beneficial friction is utilized. Numerical simulations are given to demonstrate the control performance of the proposed approach.
- Published
- 2021
- Full Text
- View/download PDF
15. Comparison of the Quality of Various Polychromatic and Monochromatic Dual-Energy CT Images with or without a Metal Artifact Reduction Algorithm to Evaluate Total Knee Arthroplasty
- Author
-
Ji Yeon Han, Young Jin Heo, Dong Wook Kim, Yoo Jin Lee, Hye Jung Choo, Jin Wook Baek, and Sun Joo Lee
- Subjects
Image Series ,Male ,Metal artifact reduction ,Image quality ,Total knee arthroplasty ,Prosthesis ,A-weighting ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Metal Artifact ,0302 clinical medicine ,Quality (physics) ,Medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Arthroplasty, Replacement, Knee ,Aged ,business.industry ,Musculoskeletal Imaging ,Weighting ,Dual-energy CT ,Metals ,030220 oncology & carcinogenesis ,Virtual monochromatic imaging ,Radiographic Image Interpretation, Computer-Assisted ,Original Article ,Female ,Monochromatic color ,business ,Artifacts ,Tomography, X-Ray Computed ,Algorithm ,Algorithms - Abstract
OBJECTIVE: To compare the quality of various polychromatic and monochromatic images, with or without an iterative metal artifact reduction algorithm (iMAR), obtained from dual-energy computed tomography (CT) to evaluate total knee arthroplasty. MATERIALS AND METHODS: We included 58 patients (28 male and 30 female; mean age [range], 71.4 [61-83] years) who underwent 74 knee examinations after total knee arthroplasty using dual-energy CT. The CT image sets consisted of polychromatic image sets that linearly blended 80 kVp and tin-filtered 140 kVp acquisitions using weighting factors of 0.4, 0, and -0.3, and monochromatic images at 130, 150, 170, and 190 keV. These image sets were obtained with and without applying iMAR, creating a total of 14 image sets. Two readers qualitatively ranked the image quality (1 [lowest quality] through 14 [highest quality]). Volumes of high- and low-density artifacts and contrast-to-noise ratios (CNRs) between bone and fat tissue were quantitatively measured in a subset of 25 knees unaffected by metal artifacts. RESULTS: iMAR-applied polychromatic images using weighting factors of -0.3 and 0.0 (P-0.3i and P0.0i, respectively) showed the highest image-quality rank scores (median of 14 for both by one reader and 13 and 14, respectively, by the other reader; p < 0.001). All iMAR-applied image series showed higher rank scores than the iMAR-unapplied ones. The smallest volumes of low-density artifacts were found in P-0.3i, P0.0i, and iMAR-applied monochromatic images at 130 keV. The smallest volumes of high-density artifacts were noted in P-0.3i. The CNRs were best in polychromatic images using a weighting factor of 0.4 with or without iMAR application, followed by polychromatic images using a weighting factor of 0.0 with or without iMAR application. CONCLUSION: Polychromatic images combined with iMAR application, P-0.3i and P0.0i, provided better image quality and substantial metal artifact reduction compared with the other image sets.
- Published
- 2021
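The linear blending step named in the entry above can be written as blended = w * I_80kVp + (1 - w) * I_Sn140kVp with w in {0.4, 0, -0.3}; a negative w extrapolates toward the high-energy image. The sketch below uses synthetic arrays, not CT data.

```python
import numpy as np

def blend_dual_energy(img_80kvp, img_sn140kvp, w):
    """Linearly blended polychromatic image: w * low-kV + (1 - w) * high-kV.
    Negative w extrapolates beyond the high-energy image, which, per the abstract,
    reduces metal artifacts at the cost of soft-tissue contrast (CNR)."""
    return w * img_80kvp + (1.0 - w) * img_sn140kvp

rng = np.random.default_rng(0)
low = rng.normal(100, 20, (4, 4))      # synthetic 80 kVp patch (HU-like values)
high = rng.normal(80, 10, (4, 4))      # synthetic tin-filtered 140 kVp patch
for w in (0.4, 0.0, -0.3):
    print(w, blend_dual_energy(low, high, w).mean().round(1))
```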
16. Epidemic zone of COVID-19 from social media using hypergraph with weighting factor (HWF)
- Author
-
S. Pradeepa and K. R. Manjula
- Subjects
Helly property ,Hypergraph ,Social network ,business.industry ,Computer science ,Property (programming) ,Twitter data ,COVID-19 ,A-weighting ,computer.software_genre ,Article ,Theoretical Computer Science ,Weighting ,Task (project management) ,Hardware and Architecture ,Social media ,Data mining ,business ,Location ,computer ,Weighting factor ,Software ,Natural Language Processing ,Information Systems - Abstract
Online social networks are among the most prominent media holding information about society's epidemic problems. For privacy reasons, most users do not disclose their location. Detecting the location of tweet users is required to track the geographic spread of diseases. This work aims to detect the spreading locations of the COVID-19 disease from Twitter users and the content discussed in their tweets. COVID-19 is a disease caused by the novel coronavirus. About 80% of confirmed cases recover from the disease; however, as stated by the World Health Organization, one out of every six people who get COVID-19 can become seriously ill. Inferring the user location to identify the spreading location of the disease is a very challenging task. This paper proposes a new technique based on a hypergraph model to detect Twitter users' locations based on the spreading disease. The model uses a hypergraph with a weighting factor technique to infer the spatial location of the spreading disease. The accuracy of prediction can be improved when a massive volume of streaming data is analyzed. The Helly property of the hypergraph is applied to discard less informative words from the text analysis, which distinguishes this work from others. A weighting factor is introduced to calculate the score of each location for a particular user, and the location of each user is predicted as the one with the highest score. The proposed framework has been evaluated and tested with various measures such as precision, recall, and F-measure. The promising results obtained substantiate this work in comparison with state-of-the-art methodologies.
- Published
- 2021
- Full Text
- View/download PDF
17. THE COMPARISON OF VIKOR AND MAUT METHODS IN THE SELECTION OF USED CARS
- Author
-
Nurul Rahmadani and Risnawati Risnawati
- Subjects
Decision support system ,Operations research ,VIKOR method ,Computer science ,ComputerSystemsOrganization_MISCELLANEOUS ,A-weighting ,Selection (genetic algorithm) - Abstract
A used car is a car that has previously been owned by someone else. Choosing a used car that meets the buyer's needs requires careful consideration. Used car buyers, of course, make their choices based on several criteria. The criteria for choosing a used car include transmission, price, passenger capacity, luggage capacity, year of manufacture, color, and engine capacity. Weighing these criteria is not easy for buyers who do not understand them well. A decision support system offers a solution for choosing a used car according to the buyer's needs. Decision support systems offer many methods; the methods used in this study are the VIKOR and MAUT methods. The problem can be solved using a weighted ranking system, so that buyers can choose a used car based on recommendations from the decision support system. This research is expected to help prospective buyers choose a used car that matches their needs.
- Published
- 2021
- Full Text
- View/download PDF
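A minimal sketch of the MAUT-style weighted scoring implied by the entry above: each criterion is normalized to a utility in [0, 1], multiplied by a criterion weight, summed, and used to rank alternatives. The cars, criterion values, and weights below are invented, and the VIKOR ranking is not shown.

```python
import numpy as np

def maut_scores(matrix, weights, benefit):
    """Simple additive MAUT: min-max normalize each criterion (inverted for
    cost criteria), then take the weighted sum.  All numbers are invented."""
    M = np.asarray(matrix, dtype=float)
    lo, hi = M.min(axis=0), M.max(axis=0)
    U = (M - lo) / np.where(hi > lo, hi - lo, 1.0)
    U = np.where(benefit, U, 1.0 - U)          # cost criteria: smaller is better
    return U @ weights

# criteria: price (cost), year (benefit), engine cc (benefit), passengers (benefit)
cars = ["car A", "car B", "car C"]
matrix = [[9500, 2015, 1500, 5],
          [7200, 2012, 1300, 5],
          [11000, 2018, 1600, 7]]
weights = np.array([0.4, 0.3, 0.1, 0.2])
benefit = np.array([False, True, True, True])
scores = maut_scores(matrix, weights, benefit)
print(sorted(zip(cars, scores.round(3)), key=lambda t: -t[1]))
```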
18. A New Weighted-learning Approach for Exploiting Data Sparsity in Tag-based Item Recommendation Systems
- Author
-
Noor Ifada and Richi Nayak
- Subjects
Scheme (programming language) ,021103 operations research ,General Computer Science ,Rank (linear algebra) ,Computer science ,Process (engineering) ,0211 other engineering and technologies ,General Engineering ,02 engineering and technology ,Construct (python library) ,A-weighting ,Recommender system ,computer.software_genre ,Weighting ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,020201 artificial intelligence & image processing ,Data mining ,computer ,computer.programming_language - Abstract
Tag-based recommendation systems built on tensor models commonly suffer from the data sparsity problem. In recent years, various weighted-learning approaches have been proposed to tackle this problem. The approaches can be categorized by how a weighting scheme is used to exploit the data sparsity, for example by employing it to construct a weighted tensor used to weight the tensor model during the learning process. In this paper, we propose a new weighted-learning approach for exploiting data sparsity in tag-based item recommendation systems. We introduce a technique to represent the users' tag preferences for leveraging the weighted-learning approach. The key idea of the proposed technique comes from the fact that users use different choices of tags to annotate the same item, while the same tag may be used to annotate various items in tag-based systems. This indicates that users' tag usage likelihoods differ and therefore their tag preferences also differ. We then present three novel weighting schemes that vary in how ordinal weighting values are used to label the users' tag preferences. As a result, three weighted tensors are generated, one from each scheme. To implement the proposed schemes for generating item recommendations, we develop a novel weighted-learning method called WRank (Weighted Rank). Our experiments show that considering the users' tag preferences in the tensor-based weighted-learning approach can solve the data sparsity problem as well as improve the quality of recommendation.
- Published
- 2021
- Full Text
- View/download PDF
19. Superior Position Estimation Based on Optimization in GNSS
- Author
-
Changhui Jiang, Shuai Chen, Yuwei Chen, Shen Jichun, Yuming Bo, and Di Liu
- Subjects
Epoch (reference date) ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Kalman filter ,A-weighting ,Tracking (particle physics) ,Least squares ,Computer Science Applications ,Transformation (function) ,GNSS applications ,Position (vector) ,Modeling and Simulation ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Algorithm - Abstract
In a conventional GNSS receiver, pseudo-range and pseudo-range rate measurements are generated through code and carrier signal tracking. Then, the Least Squares method (LSM) or a Kalman filter (KF) is utilized to estimate position and velocity (PV), using the pseudo-ranges and pseudo-range rates as measurements. However, the LSM ignores the fact that PV information is time-correlated; smoother positioning results can be obtained by considering the time-correlated characteristics of the PV information. In the KF, the PV information is estimated by weighting between the predicted and measurement-updated states, and smoother positioning results are obtained since state-transition constraints are included. However, the KF drops abundant historical information when estimating the state at the current epoch. In this letter, a Graph Optimization (GO) based GNSS position estimation method is proposed and implemented. State transitions and measurements are all regarded as constraints that optimize the state estimates in the GO method, and historical states and measurements are utilized to estimate the state at the current epoch within the GO framework. Superior position results are therefore expected compared with those from the LSM and KF. In this study, a field test was carried out, and position results from the LSM, KF, and GO methods are presented, compared, and analyzed. With the iterative process and historical information included in the GO, the field test results demonstrate that the GO method can generate better position results than the LSM and KF methods.
- Published
- 2021
- Full Text
- View/download PDF
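A toy batch least-squares sketch of the graph-optimization idea in the entry above: position states over several epochs are estimated jointly, with both measurement residuals and state-transition (constant-velocity) residuals acting as weighted constraints. This is a 1-D caricature with made-up numbers, not the letter's GNSS implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# 1-D toy: estimate positions p_0..p_4 from noisy "pseudo-range-like" measurements
# while a constant-velocity motion model links consecutive epochs.
truth = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
meas = truth + np.random.default_rng(3).normal(0, 0.3, truth.size)
v, dt = 1.0, 1.0                     # assumed known velocity and epoch interval
w_meas, w_motion = 1.0, 3.0          # constraint weights (relative information)

def residuals(p):
    r_meas = w_meas * (p - meas)                       # measurement constraints
    r_motion = w_motion * (p[1:] - p[:-1] - v * dt)    # state-transition constraints
    return np.concatenate([r_meas, r_motion])

p_go = least_squares(residuals, x0=meas).x
print("per-epoch estimates:", meas.round(2))
print("batch GO estimates :", p_go.round(2))
```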
20. Development of Tire-Wear Particle Emission Measurements for Passenger Vehicles
- Author
-
Sousuke Sasaki and Yoshio Tonegawa
- Subjects
Range (particle radiation) ,Materials science ,010504 meteorology & atmospheric sciences ,Health, Toxicology and Mutagenesis ,Exhaust gas ,Quadratic function ,A-weighting ,Mechanics ,010501 environmental sciences ,Management, Monitoring, Policy and Law ,01 natural sciences ,Pollution ,Acceleration ,Filter (large eddy simulation) ,Sampling (signal processing) ,Automotive Engineering ,Tread ,0105 earth and related environmental sciences - Abstract
In this study, we aimed to develop a new method for measuring tire-wear particles of less than 2.5 μm generated by vehicle use. We also aimed to devise a method for evaluating the emission factor of tire-wear particles. To develop an evaluation method for tire-wear particles, we examined several factors, such as how tire components in airborne particles collected on a sampling filter were measured, the comparison of tire-wear particles obtained in a laboratory study and an on-road study, a method for measuring tire-wear particles using a test vehicle, and a method for evaluating tire-wear mass using a weighing balance. Measurements of tire-wear particles were carried out using the measurement method proposed herein. The amount of tire-wear particles generated was almost constant over a vehicle speed range of 20–40 km/h but was influenced by changes in lateral acceleration in the range of 0–0.4 G. Furthermore, the relationship between the emission of tire-wear particles and the lateral acceleration force can be described by a quadratic polynomial. We estimated the emission factor of tire-wear particles by applying this relational equation to the speed profile of the JC08 cycle used in Japanese exhaust gas tests. The emission factor of the test tire used in this study was 3.7 mg/km-vehicle. The ratio of tire-wear particles to tread wear mass was about 3.3% for PM2.5 and 3.7% for PM10.
- Published
- 2021
- Full Text
- View/download PDF
21. Blind separation of underdetermined Convolutive speech mixtures by time–frequency masking with the reduction of musical noise of separated signals
- Author
-
Midia Reshadi, Azam Rabiee, Mahbanou Zohrevandi, and Saeed Setayeshi
- Subjects
Underdetermined system ,Computer Networks and Communications ,Computer science ,business.industry ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,A-weighting ,Reduction (complexity) ,Noise ,Permutation ,Orthogonality ,Computer Science::Sound ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Artificial intelligence ,Focus (optics) ,business ,Cluster analysis ,Software - Abstract
The main focus of this paper is the separation of underdetermined convolutive speech mixtures in a multi-speaker environment. We present a method based on mask prediction in the time-frequency (TF) domain. Firstly, relying on the sparsity of signals in the TF domain, we estimate speakers' masks by clustering the relative absolute and Hermitian angle features extracted from the frequency components of the mixtures. Speech separation algorithms based on the sparsity and disjoint orthogonality of speech signals in the time-frequency domain are not efficient when more than one source is active. Hence, in this paper, the cluster centers are estimated mostly from the TF units that probably have only one active source. The correlations between the estimated masks belonging to adjacent frequency bins are leveraged to solve the permutation problem. To increase accuracy, we zero the value of the masks at TF units without any active source. Moreover, in clustering, we employ a weighting function to emphasize the parts of the masks that probably contain just one active source. Finally, in order to decrease the musical noise of the separated signals and improve their quality, sparse filters in the time domain are utilized to re-estimate the separated signals. The performance of the proposed method is evaluated on a number of simulated and real speech signals. The simulated experiments were performed using a public dataset and the Roomsim simulator. Comparing the proposed method with some conventional algorithms, we observed that our separation method is more accurate than the other approaches.
- Published
- 2021
- Full Text
- View/download PDF
22. Restricted calibration and weight trimming approaches for estimation of the population total in business statistics
- Author
-
Cenker Burak Metin, Yaprak Arzu Özdemir, and Sinem Tuğba Şahin Tekin
- Subjects
Statistics and Probability ,Estimation ,education.field_of_study ,Business statistics ,Economics, Business & Finance ,Calibration (statistics) ,Computer science ,Population ,Process (computing) ,A-weighting ,Weighting ,Statistics ,Trimming ,Statistics, Probability and Uncertainty ,education - Abstract
Some adjustments are made to design weights to reduce the negative effects of non-response and out-of-scope problems. The calibration approach is a weighting process that makes the weights agree with known population values by using auxiliary information. In this study, alternative calibration approaches and weight trimming processes that can be used in large data sets with extreme weights and different correlation structures were analysed. In addition, the effect of the correlation structure of the auxiliary variables on the efficiency of the calibration estimators was investigated through a simulation study. The 2017 Annual Industry and Service Statistics data were used in the simulation study, and it was seen that restricted calibration estimators were more efficient than the generalized regression estimator in estimating variables with high variance, such as turnover. Especially for small sample fractions, we recommend the application of restricted calibration estimators, as they are more efficient than weight trimming in solving the problem of negative and less-than-one weights encountered after the calibration process.
- Published
- 2021
- Full Text
- View/download PDF
23. A geostatistical spatio-temporal model to non-fixed locations
- Author
-
Peter J. Diggle, Paulo Justiniano Ribeiro, W. H. Bonat, and V. F. Sehaber
- Subjects
Environmental Engineering ,Computer science ,Gaussian ,Inference ,Estimator ,Function (mathematics) ,A-weighting ,Kalman filter ,Covariance ,symbols.namesake ,Consistency (statistics) ,symbols ,Environmental Chemistry ,Safety, Risk, Reliability and Quality ,Algorithm ,General Environmental Science ,Water Science and Technology - Abstract
We investigated a Gaussian conditional geostatistical spatio-temporal model (CGSTM) aiming to fit data observed at non-fixed locations over discrete times, based only on the observed locations. The model specifies the process state at the current time conditional on the process state in the recent past. In particular, the process mean uses a weighting function governing the spatio-temporal model evolution and handling the interaction between space and time. The CGSTM provides attractive features: it belongs to the dynamic linear model framework, models non-fixed locations over time, and easily provides forecasting maps k steps ahead. Likelihood estimation and inference are based on a Kalman filter-based algorithm. Equivalent closed forms of the covariance and precision matrices of the spatio-temporal joint distribution were obtained. We performed a simulation study considering the locations of a real data example, in which the data locations vary over time. A second simulation study was run using various scenarios for parameter values and numbers of observations in time and space, showing consistency and unbiasedness of the model estimators. Thirdly, the model was fitted to an average monthly rainfall dataset, with 678 temporal records at 32 stations located in western Paraná, Brazil. The rainfall station locations underwent geographical changes from 1961 to 2017. In this modelling, we used explanatory variables and provided forecasting maps.
- Published
- 2021
- Full Text
- View/download PDF
24. A Weighting Radius Prediction Iteration Optimization Algorithm Used in Photogrammetry for Rotary Body Structure of Port Hoisting Machinery
- Author
-
Zhangyan Zhao, Yang Liu, Chenghua Zhang, Enshun Lu, and Yifan Liu
- Subjects
General Computer Science ,Intersection (set theory) ,Computer science ,Port hoisting machinery ,General Engineering ,Rendezvous ,A-weighting ,Radius ,iteration ,TK1-9971 ,Set (abstract data type) ,Data acquisition ,Photogrammetry ,rotary body structure ,General Materials Science ,Point (geometry) ,weighting ,Electrical engineering. Electronics. Nuclear engineering ,Electrical and Electronic Engineering ,Algorithm ,radius - Abstract
As a non-contact measurement technology with high data acquisition efficiency, photogrammetry is an ideal choice for collecting the data needed in the safety evaluation of port hoisting machinery. However, when existing photogrammetry methods are used to measure rotary body structures such as the portal crane slewing mechanism, the accuracy of the radius fitting results cannot meet the requirements of safety assessment, owing to the limitations of the port crane itself and the characteristics of the working environment. To solve this problem, an iterative optimization algorithm with weighted radius prediction is proposed in this paper for the photogrammetry of the slewing mechanism of port hoisting machinery. First, the algorithm uses a generalized multi-line rendezvous model to transform the radius fitting problem into a multi-line intersection point prediction problem, which lays the theoretical basis for the subsequent algorithm implementation. Second, by introducing a weighting algorithm based on the camera optical distortion model, the algorithm improves the accuracy of the radius fitting results. In addition, through a quantitative evaluation of fitting accuracy based on the weighting algorithm, the algorithm establishes a set of iteration rules to balance the accuracy of the measurement results against the execution efficiency of the algorithm. Finally, this paper designs theoretical verification tests and simulated engineering tests based on the characteristics of the algorithm and the engineering practice of port hoisting machinery photogrammetry. The experimental results demonstrate that, compared with the traditional algorithm, the proposed algorithm can significantly improve the accuracy of the radius fitting results when the data quantity is small and the data quality is poor.
- Published
- 2021
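The multi-line rendezvous idea in the entry above can be illustrated by the classic weighted least-squares "intersection" of 3D lines: the point x minimizing sum_i w_i ||(I - d_i d_i^T)(x - p_i)||^2, solved via its normal equations. The weights below simply stand in for the distortion-based weights described in the abstract, and the geometry is invented.

```python
import numpy as np

def weighted_line_intersection(points, directions, weights):
    """Point x minimizing sum_i w_i * ||(I - d_i d_i^T)(x - p_i)||^2,
    i.e. the weighted least-squares 'intersection' of 3D lines
    (p_i = point on line i, d_i = unit direction of line i)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d, w in zip(points, directions, weights):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projector onto the plane normal to d
        A += w * P
        b += w * P @ p
    return np.linalg.solve(A, b)

# three nearly concurrent lines around the true point (1, 2, 3), unequal weights
pts = np.array([[0.0, 2.0, 3.0], [1.0, 0.0, 3.1], [1.1, 2.0, 0.0]])
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(weighted_line_intersection(pts, dirs, weights=[1.0, 0.5, 2.0]).round(3))
```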
25. DOA Estimation With Small Snapshots Using Weighted Mixed Norm Based on Spatial Filter
- Author
-
Li Xu, Shin-ya Matsushita, and Beiyi Liu
- Subjects
Spatial filter ,Computer Networks and Communications ,Aerospace Engineering ,A-weighting ,Matrix decomposition ,Noise ,Matrix (mathematics) ,Compressed sensing ,Norm (mathematics) ,Automotive Engineering ,Electrical and Electronic Engineering ,Algorithm ,Sparse matrix ,Mathematics - Abstract
$\ell _{2,1}$-norm penalized compressive sensing (CS) has recently been utilized to improve the performance of DOA estimation with small snapshots. However, the existing CS-based methods are not robust to noise. In this article, we propose a CS-based DOA estimation method using a novel weighted $\ell _{2,1}$-norm penalty. A spatial filter is constructed that can roughly "clean up", or eliminate, the signals coming from the directions of the true sources. Thus, the spatial spectrum of the filter can serve as a weighting matrix that adjusts the sparse penalty automatically. A new weighted $\ell _{2,1}$-norm penalty based on this spatial filter is then proposed for DOA estimation. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithm.
- Published
- 2020
- Full Text
- View/download PDF
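For readers unfamiliar with the penalty named in the entry above, the sketch below simply evaluates a weighted l2,1 norm of a grid-by-snapshot coefficient matrix, with weights taken from a hypothetical spatial-filter spectrum so that directions suppressed by the filter are penalized more heavily. It does not reproduce the paper's optimization problem or its solver.

```python
import numpy as np

def weighted_l21(S, w):
    """Weighted l2,1 norm: sum_i w_i * ||S[i, :]||_2, where row i of S holds the
    signal coefficients of grid direction i across the snapshots."""
    return np.sum(w * np.linalg.norm(S, axis=1))

# hypothetical spatial-filter spectrum over a coarse DOA grid:
# small values near the true source directions, large elsewhere -> larger penalty
filter_spectrum = np.array([1.0, 0.9, 0.05, 0.8, 0.07, 1.0])
w = filter_spectrum / filter_spectrum.max()

S = np.zeros((6, 4))
S[2] = [1.0, 1.1, 0.9, 1.0]        # energy concentrated at the true directions
S[4] = [0.8, 0.7, 0.9, 0.8]
print(weighted_l21(S, w))
```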
26. Deep combiner for independent and correlated pixel estimates
- Author
-
Toshiya Hachisuka, Jonghee Back, Binh-Son Hua, and Bochang Moon
- Subjects
Pixel ,Computer science ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,A-weighting ,Computer Graphics and Computer-Aided Design ,Rendering (computer graphics) ,Noise ,Kernel (image processing) ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Monte Carlo integration ,Algorithm - Abstract
Monte Carlo integration is an efficient method to solve a high-dimensional integral in light transport simulation, but it typically produces noisy images due to its stochastic nature. Many existing methods, such as image denoising and gradient-domain reconstruction, aim to mitigate this noise by introducing some form of correlation among pixels. While those existing methods reduce noise, they are known to still suffer from method-specific residual noise or systematic errors. We propose a unified framework that reduces such remaining errors. Our framework takes a pair of images, one with independent estimates, and the other with the corresponding correlated estimates. Correlated pixel estimates are generated by various existing methods such as denoising and gradient-domain rendering. Our framework then combines the two images via a novel combination kernel. We model our combination kernel as a weighting function with a deep neural network that exploits the correlation among pixel estimates. To improve the robustness of our framework for outliers, we additionally propose an extension to handle multiple image buffers. The results demonstrate that our unified framework can successfully reduce the error of existing methods while treating them as black-boxes.
- Published
- 2020
- Full Text
- View/download PDF
27. Distance-to-target weighting in LCA—A matter of perspective
- Author
-
Matthias Finkbeiner, Marco Muhl, and Markus Berger
- Subjects
policy targets ,Data collection ,Ecological Scarcity Method ,Computer science ,media_common.quotation_subject ,Supply chain ,distance to target ,A-weighting ,Environmental economics ,577 Ökologie ,LCIA ,Weighting ,Scarcity ,normalization ,Scale (social sciences) ,regionalization ,weighting ,ddc:577 ,Product (category theory) ,Life-cycle assessment ,General Environmental Science ,media_common - Abstract
Purpose: Weighting can provide valuable support for decision-makers when interpreting life cycle assessment (LCA) results. Distance-to-target (DtT) weighting is based on the distance of policy (desired) targets from current environmental situations, and recent methodological DtT developments are based on a weighting perspective of a single region or country, considering mainly environmental situations in consuming countries or regions. However, as product supply chains are spread over many countries, this study aims at developing additional weighting approaches (producer regions and worst-case regions) and applying them in a theoretical case study on a global scale. Methods: The study was carried out to understand the influence on weighting results of different countries and regions with their specific environmental policy targets. Based on the existing Ecological Scarcity Method (ESM), eco-factors for the three environmental issues of climate change, acidification, and water resources were derived for as many countries as possible. The regional eco-factors were applied in a case study for steel and aluminum considering the three different weighting approaches on different regional scales. Results and discussion: The analysis revealed significant differences in the obtained weighting results, as well as strengths and limitations in the applicability of the examined perspectives. Acidification was shown to be highly important, accounting for between 80 and 92% of the aggregated weighting results among the perspectives where water-scarce countries were not involved. Water-scarce countries had a significant influence (75–95%) when they were part of the examined case study. Conclusions: The developed approaches enable the assessment of global value chains in different producer regions as well as the use of the conservative worst-case-regions approach. The approaches can foster future decision-making in LCA contexts while providing country-specific results based on different weighting perspectives in national, regional, and global contexts. However, for a complete implementation of the presented approaches, further data gathering is needed on environmental situations and policy targets in different countries, as well as regionalized life cycle data.
- Published
- 2020
- Full Text
- View/download PDF
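A strongly simplified sketch of a distance-to-target eco-factor in the spirit of the Ecological Scarcity Method named in the entry above: the weight grows with the square of the ratio of the current flow to the target (critical) flow and is normalized by a reference flow. The formula is a simplified reading, the characterization step is omitted, and the constants and country values are placeholders, not the study's regionalized eco-factors.

```python
def eco_factor(current_flow, critical_flow, normalization_flow, c=1e12):
    """Simplified distance-to-target eco-factor: the further the current flow
    exceeds the policy target (critical flow), the larger the weight.
    Simplified reading of the Ecological Scarcity Method with placeholder
    constants; not the study's regionalized eco-factors."""
    return (1.0 / normalization_flow) * (current_flow / critical_flow) ** 2 * c

# two hypothetical countries, same impact category (e.g., tonnes CO2-eq per year)
country_a = eco_factor(current_flow=5.0e8, critical_flow=2.5e8, normalization_flow=5.0e8)
country_b = eco_factor(current_flow=3.0e8, critical_flow=2.8e8, normalization_flow=3.0e8)
print(f"{country_a:.2e} {country_b:.2e}")   # A carries a larger weight: it overshoots its target more
```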
28. Self-constrained inversion of potential fields through a 3D depth weighting
- Author
-
Maurizio Fedi, Andrea Vitale, Vitale, Andrea, and Fedi, Maurizio
- Subjects
Geophysics ,010504 meteorology & atmospheric sciences ,Geochemistry and Petrology ,Mathematical analysis ,Inversion (meteorology) ,A-weighting ,010502 geochemistry & geophysics ,01 natural sciences ,Geology ,0105 earth and related environmental sciences ,Weighting - Abstract
A new method for the inversion of potential fields is developed using a depth-weighting function specifically designed for fields related to complex source distributions. Such a weighting function is determined from an analysis of the field that precedes the inversion itself. The algorithm is self-consistent, meaning that the weighting used in the inversion is directly deduced from the scaling properties of the field. Hence, the algorithm is based on two steps: (1) estimation of the local homogeneity degree of the field in a 3D domain of the harmonic region and (2) inversion of the data using a specific weighting function with a 3D variable exponent. A multiscale data set is first formed by upward continuation of the original data. Local homogeneity and a multihomogeneous model are then assumed, and a system built on the scaling function is solved at each point of the multiscale data set, yielding a multiscale set of local-homogeneity degrees of the field. Then, the estimated homogeneity degree is associated with the model weighting function in the source volume. Tests on synthetic data show that the generalization of the depth weighting to a 3D function and the proposed two-step algorithm have great potential to improve the quality of the solution. The gravity field of a polyhedron is inverted, yielding a realistic reconstruction of the whole body, including the bottom surface. The inversion of a real aeromagnetic data set from the Mt. Vulture area also yields a good and geologically consistent reconstruction of the complex source distribution.
- Published
- 2020
- Full Text
- View/download PDF
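As background for the entry above, the standard depth-weighting idea (after Li and Oldenburg) uses w(z) = (z + z0)^(-beta/2) to counteract the natural decay of the potential-field kernels with depth. The sketch below merely lets the exponent vary cell by cell, mimicking the 3D-variable exponent the abstract describes; the exponent field is made up, and the link to locally estimated homogeneity degrees is only indicated, not implemented.

```python
import numpy as np

def depth_weighting(z, beta, z0=1.0):
    """Depth-weighting function w(z) = (z + z0)^(-beta/2), in the spirit of
    Li & Oldenburg; here beta may vary cell by cell, mimicking a 3D-variable
    exponent derived from locally estimated homogeneity degrees."""
    return (z + z0) ** (-beta / 2.0)

# a small 3D mesh: depths of cell centres and a made-up 3D exponent field
nz, ny, nx = 4, 3, 3
z = np.linspace(50.0, 400.0, nz).reshape(nz, 1, 1) * np.ones((nz, ny, nx))
beta = 2.0 + np.random.default_rng(0).uniform(0.0, 1.0, size=(nz, ny, nx))  # hypothetical exponent field
W = depth_weighting(z, beta)
print(W.shape, W[:, 0, 0].round(5))   # weights decay with depth, modulated by beta
```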
29. Homecare-Oriented Intelligent Long-Term Monitoring of Blood Pressure Using Electrocardiogram Signals
- Author
-
Fan Xu, Yang Zhao, Kwok-Leung Tsui, Xiaomao Fan, and Hailiang Wang
- Subjects
education.field_of_study ,Mean arterial pressure ,medicine.diagnostic_test ,Computer science ,020208 electrical & electronic engineering ,Real-time computing ,Population ,02 engineering and technology ,A-weighting ,Computer Science Applications ,Ecg monitoring ,Blood pressure ,Control and Systems Engineering ,Long term monitoring ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Electrical and Electronic Engineering ,education ,Electrocardiography ,Information Systems - Abstract
Long-term blood pressure (BP) monitoring is a widely used approach in homecare intelligent systems. However, BP is usually measured using cuff-based devices with tedious operation in practice, which may not be cost effective for continuous BP tracking. In this article, we propose a novel attention-based multitask network with a weighting scheme for BP estimation by analyzing and modeling single-lead electrocardiogram (ECG) signals. Experimental results demonstrate that the proposed method achieves mean errors of 0.18 ± 10.83, 1.24 ± 5.90, and 0.84 ± 6.47 mmHg for systolic blood pressure, diastolic blood pressure, and mean arterial pressure estimation, respectively. In comparison with other cutting-edge methods using ECG signals, the proposed method shows superior BP estimation performance. By integrating with a wearable/portable ECG monitoring device, the proposed model can be deployed to an embedded system or a remote healthcare intelligent system to provide a long-term BP monitoring service, which would help to reduce the incidence of malignant events occurring in the hypertensive population.
- Published
- 2020
- Full Text
- View/download PDF
30. Alternative estimation approaches for the factor augmented panel data model with small T
- Author
-
Philipp Hansen and Jörg Breitung
- Subjects
Statistics and Probability ,Normalization (statistics) ,Economics and Econometrics ,Asymptotic analysis ,05 social sciences ,Monte Carlo method ,Estimator ,A-weighting ,Mathematics (miscellaneous) ,0502 economics and business ,Principal component analysis ,Applied mathematics ,050207 economics ,Focus (optics) ,Social Sciences (miscellaneous) ,050205 econometrics ,Panel data ,Mathematics - Abstract
In this paper, we compare alternative estimation approaches for factor augmented panel data models. Our focus lies on panel data sets where the number of panel groups (N) is large relative to the number of time periods (T). The principal component (PC) and common correlated effects (CCE) estimators were originally developed for panel data with large N and T, whereas the GMM approaches of Ahn et al. (J Econ 174(1):1–14, 2013) and Robertson and Sarafidis (J Econ 185(2):526–541, 2015) assume that T is small (that is, T is fixed in the asymptotic analysis). Our comparison of existing methods addresses three different issues. First, we analyze the possibility of an inappropriate normalization of the factor space (the so-called normalization failure). In particular, we propose a variant of the CCE estimator that avoids the normalization failure by adapting a weighting scheme inspired by the analysis of Mundlak (Econometrica 46(1):69–85, 1978). Second, we analyze the effects of estimating versus fixing the number of factors in advance. Third, we demonstrate how the design of the Monte Carlo simulations favors some estimators, which explains the conflicting findings from existing Monte Carlo experiments.
- Published
- 2020
- Full Text
- View/download PDF
31. An $\mathscr{H}_\infty$ Approach to Data-Driven Offset-Free Tracking
- Author
-
B. Esmaeili and M. Salim
- Subjects
0209 industrial biotechnology ,Exact model ,Offset (computer science) ,Computer science ,Attenuation ,020208 electrical & electronic engineering ,Energy Engineering and Power Technology ,02 engineering and technology ,A-weighting ,Computer Science Applications ,Data-driven ,020901 industrial engineering & automation ,Control and Systems Engineering ,Control theory ,Integrator ,Control system ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Subspace topology - Abstract
Data-driven controllers, also called model-free controllers, were developed to omit the plant-modeling step of model-based controllers. Their design procedure is based directly on experimental I/O data collected from the real plant, which ensures their reliability in real-world applications, where an exact model is not available in most cases. In this paper, we consider the problem of accurate tracking in the presence of external disturbances using data-driven methodologies combined with an $\mathscr{H}_\infty$ approach. With the improved subspace-based predictor defined as the base step of the proposed controller's design procedure, an integrator is applied to the control loop, which increases the accuracy of the controller's reference tracking. Moreover, a weighting function is used for disturbance attenuation. Simulation results clearly illustrate the efficiency and satisfactory performance of the proposed controller.
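The abstract mentions a weighting function for disturbance attenuation without giving its form. A common choice in mixed-sensitivity H-infinity design is a first-order performance weight; the sketch below, with purely illustrative parameter values and not necessarily the authors' weight, shows how such a function can be defined and inspected in Python.

```python
import numpy as np
from scipy import signal

# A common first-order performance weight for mixed-sensitivity design:
# W_p(s) = (s / M + w_b) / (s + w_b * A), where M bounds the peak
# sensitivity, w_b sets the desired bandwidth and A the steady-state error.
# These particular numbers are illustrative, not taken from the paper.
M, wb, A = 2.0, 1.0, 1e-3
Wp = signal.TransferFunction([1.0 / M, wb], [1.0, wb * A])

# Inspect the magnitude response: large gain at low frequency pushes the
# closed loop toward offset-free tracking and disturbance attenuation.
w = np.logspace(-3, 2, 200)
w, mag, _ = signal.bode(Wp, w)
print(f"|W_p| at w=1e-3 rad/s: {mag[0]:.1f} dB, at w=100 rad/s: {mag[-1]:.1f} dB")
```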
- Published
- 2020
- Full Text
- View/download PDF
32. Analysis of demand–supply gaps in public transit systems based on census and GTFS data: a case study of Calgary, Canada
- Author
-
Lina Kattan, Merkebe Getachew Demissie, Santi Phithakkitnukoon, Carlo Ratti, and Koragot Kaeoruean
- Subjects
Normalization (statistics) ,Bridging (networking) ,Computer science ,business.industry ,Mechanical Engineering ,Business system planning ,Transportation ,A-weighting ,Management Science and Operations Research ,Census ,Supply and demand ,Transport engineering ,Software deployment ,Public transport ,business ,Information Systems - Abstract
Bridging the gap between demand and supply in transit service is crucial for public transportation management, as planning actions can be implemented to generate supply in high-demand areas or to improve inefficient deployment of transit service in low-demand areas. This study aims to introduce feasible approaches for measuring gap types 1 and 2. Gap type 1 measures the gap between public transit capacity and the number of public transit riders per area, while gap type 2 measures the gap between demand and supply as a normalized index. Gap type 1 provides a value that is more realistic than gap type 2, but it requires detailed passenger data that are not always readily available. Gap type 2 is a practical alternative when detailed passenger data are unavailable, because it uses a weighting scheme to estimate demand values. It also uses a newly proposed normalization method, called the M-score, which allows for a longitudinal gap analysis in which yearly gap patterns and trends can be observed and compared. A 5-year gap analysis of Calgary transit is used as a case study. This work presents a new perspective on hourly gaps and proposes a gap measurement approach that contributes to public transit system planning and service improvement.
- Published
- 2020
- Full Text
- View/download PDF
33. Incentivised Travel and Mobile Application as Multiple Policy Intervention for Mode Shift
- Author
-
K. S. Asitha and Hooi Ling Khoo
- Subjects
Operations research ,Computer science ,business.industry ,Process (engineering) ,media_common.quotation_subject ,0211 other engineering and technologies ,Mode (statistics) ,02 engineering and technology ,A-weighting ,Structural equation modeling ,Incentive ,Public transport ,021105 building & construction ,Conceptual model ,business ,Mode choice ,021101 geological & geomatics engineering ,Civil and Structural Engineering ,media_common - Abstract
Transport is one of the most influential aspects of a person's day-to-day life, as their activities cannot be fulfilled without moving. Expecting a person to change their travel behaviour from their routine or regular pattern involves major decisions that are subject to a weighting process of gains and losses. While the decision to cancel a trip or choose an alternative travel route is a much easier call, a mode shift to public transportation is far more complex and requires encouraging factors to make it happen. The objective of this study is to investigate the impact of incentive programmes and mobile applications on mode shift. A conceptual structural equation model (SEM) is developed based on questionnaires and analysed by applying statistical methods. The model results indicate that the simultaneous implementation of mixed-mode strategies is more effective in encouraging travel mode choice decisions. These strategies are the push-pull approach (PPA) and the pull-information approach (PIA). The right formula combining incentive programmes and push factors can encourage behavioural change in mode choice (PPA strategy), whereas pull or soft policies with the right travel application can be vital in mode choice (PIA strategy). In terms of significance, the PIA is twice as significant as the PPA.
- Published
- 2020
- Full Text
- View/download PDF
34. A Comparison of Normalized and Non-Normalized Multiplicative Subjective Importance Weighting in Quality of Life Measurement
- Author
-
Qiguang Li, Houchao Lyu, and Chang-ming Hsieh
- Subjects
Sociology and Political Science ,Multiplicative function ,General Social Sciences ,Life satisfaction ,Regression analysis ,Sample (statistics) ,A-weighting ,Weighting ,Arts and Humanities (miscellaneous) ,Quality of life ,Statistics ,Developmental and Educational Psychology ,Subjective well-being ,Mathematics - Abstract
In quality of life (QOL) studies, importance weighting generally refers to the incorporation of perceived importance as a weighting factor into measures of QOL. Although there are issues with multiplicative scores (multiplying satisfaction and importance scores), the use of multiplicative scores as a method of non-normalized importance weighting remains common. In addition, researchers have suggested assessing importance weighting by inspecting life domains individually (i.e., from a within-domain perspective). Analyzing survey data from a sample of 328 Chinese adults, we (1) compared the non-normalized importance weighting method (multiplicative scores) and the normalized linear importance weighting method and showed that they not only represent different concepts but also produce different empirical results for importance weighting, (2) provided empirical evidence demonstrating the problems of assessing importance weighting from a within-domain perspective, and (3) presented the alternative variables to be included in regression analysis to assess normalized linear importance weighting.
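For readers unfamiliar with the two weighting schemes being compared, the following minimal sketch (with made-up ratings on hypothetical domains) contrasts a non-normalized multiplicative score with a normalized linear importance-weighted score; it is only meant to show why the two are not interchangeable, not to reproduce the authors' analysis.

```python
import numpy as np

# Satisfaction (1-7) and importance (1-7) ratings over five hypothetical
# life domains for one respondent; values are purely illustrative.
satisfaction = np.array([6, 4, 5, 3, 7], dtype=float)
importance = np.array([7, 3, 5, 6, 2], dtype=float)

# Non-normalized multiplicative weighting: sum of satisfaction * importance.
multiplicative_score = np.sum(satisfaction * importance)

# Normalized linear weighting: weights sum to one, so the composite stays
# on the original satisfaction scale.
weights = importance / importance.sum()
normalized_score = np.sum(weights * satisfaction)

print(multiplicative_score)  # scale depends on both rating ranges
print(normalized_score)      # bounded by the satisfaction scale (1-7)
```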
- Published
- 2020
- Full Text
- View/download PDF
35. Artificial Software Complex 'Artificial Head'. Part 1 Adjusting the Frequency Response of the Path
- Author
-
Olexandr Oleksandrovych Dvornyk, Arkadii Mykolaiovych Prodeus, Daria Motorniuk, and Marina Vitaliivna Didkovska
- Subjects
Frequency response ,Frequency band ,Microphone ,Acoustics ,A-weighting ,Impulse (physics) ,Hann function ,Impulse response ,Weighting ,Mathematics - Abstract
In this paper, a technique is developed for correcting the frequency response of the measuring path of the hardware-software complex "Artificial Head", intended for the acoustic examination of rooms. This correction is necessary because the amplitude-frequency responses of the loudspeaker and the microphone are not perfectly uniform over the analysed frequency band; instead of the room impulse response, what is actually estimated is the convolution of the room impulse response with the impulse responses of the loudspeaker (test signal source) and the microphone. It is shown that such a correction can be made by controlled division of the frequency response of the loudspeaker-room-microphone system by a previously obtained estimate of the amplitude-frequency response of the loudspeaker-microphone subsystem. The difficulty in this calculation is the division operation: the amplitude-frequency response of the loudspeaker-microphone subsystem may contain very small values, which can cause numerical overflow and crash the computation. However, if proper control over the amplitude-frequency characteristic of the loudspeaker-microphone subsystem is ensured, such a division can be implemented in practice. Since the amplitude-frequency response of the loudspeaker-microphone subsystem takes its smallest values at the edges of the frequency range, and the variance of the estimate of the cross-spectrum of the loudspeaker-room-microphone system is largest at the right edge of the frequency range, it is advisable to apply regularization to achieve the necessary calculation accuracy. The role of the regularizing factor can be played by a spectral weighting window whose values are close to one at low and medium frequencies and approach zero towards the right edge of the frequency range; the width of this weighting window then acts as the regularization parameter. The nature and magnitude of the effect of such a correction on the accuracy of the room impulse response estimate are analysed in this paper. It is shown that the Hann (Hanning) window can be used as the regularizing factor, and a window width providing satisfactory properties of the room impulse response estimate was found experimentally. It is also shown that a Hann window width close to 80% of the total analysed frequency range is satisfactory for practical applications.
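A minimal sketch of the regularized spectral division described above is given below; the function and variable names are hypothetical, and the default window fraction of 0.8 simply echoes the Hann-window width the abstract reports as satisfactory.

```python
import numpy as np
from scipy.signal import get_window

def corrected_room_response(H_lrm, H_lm, win_fraction=0.8, eps=1e-12):
    """Divide the loudspeaker-room-microphone spectrum by the
    loudspeaker-microphone spectrum, regularized by a spectral window.

    H_lrm        : complex spectrum of the full path (one-sided, length N)
    H_lm         : complex spectrum of the loudspeaker-microphone subsystem
    win_fraction : fraction of the band kept near unity by the window;
                   plays the role of the regularization parameter
    """
    n = len(H_lrm)
    # Weighting window that is ~1 at low/mid frequencies and rolls off to
    # zero towards the upper band edge (falling half of a Hann window).
    roll_len = max(1, int(round(n * (1.0 - win_fraction))))
    window = np.ones(n)
    window[n - roll_len:] = get_window("hann", 2 * roll_len)[roll_len:]
    # Regularized division: the window suppresses bins where |H_lm| is tiny.
    return window * H_lrm / (H_lm + eps)

# Tiny demonstration with synthetic spectra.
rng = np.random.default_rng(3)
n = 512
H_sub = 1.0 + 0.3 * rng.standard_normal(n) + 1j * 0.3 * rng.standard_normal(n)
H_full = H_sub * np.exp(-1j * np.linspace(0, 20, n))  # stand-in room response
H_room = corrected_room_response(H_full, H_sub)
```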
- Published
- 2020
- Full Text
- View/download PDF
36. Robust estimation for moment condition models with data missing not at random
- Author
-
Peisong Han, Shu Yang, and Wei Li
- Subjects
Statistics and Probability ,Applied Mathematics ,Estimator ,Inference ,A-weighting ,Conditional probability distribution ,Missing data ,Empirical likelihood ,Robustness (computer science) ,Covariate ,Statistics ,Statistics::Methodology ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
We consider estimation for parameters defined through moment conditions when data are missing not at random. The missingness mechanism cannot be determined from the data alone, and inference under missingness not at random may be sensitive to unverifiable assumptions about the missingness mechanism. To add protection against model misspecification, we posit multiple models for the response probability and propose a weighting estimator with calibrated weights. Assuming the conditional distribution of the outcome given the covariates is correctly modeled, we show that if any one of the multiple models for the response probability is correctly specified, the proposed estimator is consistent for the true value. A simulation study confirms that our estimator has multiple robustness when the outcome data are missing not at random. The method is also illustrated with an application.
- Published
- 2020
- Full Text
- View/download PDF
37. Assessment of the watering needs of an interior vertical garden through the use of a weighting lysimeter
- Author
-
D. Bañón, J.M. Molina-Martínez, J. Ochoa, and S. Bañón
- Subjects
Hydrology ,Lysimeter ,Environmental science ,A-weighting ,Horticulture - Published
- 2020
- Full Text
- View/download PDF
38. Cross-Covariance Weight of GSTAR-SUR Model for Rainfall Forecasting in Agricultural Areas
- Author
-
Atiek Iriany, Ni Wayan Suryawardhani, Hartawati Hartawati, Agus Sulistyono, and Aniek Iriany
- Subjects
Normalization (statistics) ,Mean squared error ,Agriculture ,business.industry ,High variability ,Statistics ,General Medicine ,Cross-covariance ,A-weighting ,business ,Mathematics - Abstract
The use of location weights in the formation of a spatio-temporal model contributes to the accuracy of the resulting model. Commonly used location weights include the uniform weight, inverse distance, and cross-correlation normalization, all of which consider the proximity between locations. For data with a high level of variability, the use of these location weights is less relevant. This research was conducted with the aim of obtaining a weighting method that is more suitable for data with high variability. It uses secondary 10-day rainfall data obtained from BMKG Karangploso for the period January 2008 to December 2018. The rain posts studied are those of the Blimbing, Karangploso, Singosari, Dau, and Wagir regions. Based on the results, the forecasting model obtained is the GSTAR((1),1,2,3,12,36)-SUR model. The cross-covariance weighting produces better accuracy, with lower RMSE values and higher R² values, especially for the Karangploso, Dau, and Wagir areas.
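The following sketch shows one plausible way to build a cross-covariance-based location-weight matrix for a GSTAR-type model; the lag choice, the use of absolute values, and the row normalization are assumptions for illustration and may differ from the weighting actually used in the paper.

```python
import numpy as np

def cross_cov_weights(rainfall, lag=1):
    """Location-weight matrix from lagged cross-covariances.

    rainfall : (T, N) array, T time points at N rain posts.
    Returns an (N, N) matrix with zero diagonal whose rows sum to one.
    """
    x = rainfall - rainfall.mean(axis=0)
    _, N = x.shape
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                # covariance of series i at time t with series j at t - lag
                W[i, j] = abs(np.mean(x[lag:, i] * x[:-lag, j]))
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
demo = rng.gamma(2.0, 10.0, size=(120, 5))  # synthetic 10-day rainfall, 5 posts
print(cross_cov_weights(demo, lag=1).round(3))
```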
- Published
- 2020
- Full Text
- View/download PDF
39. An improved urban cellular automata model by using the trend-adjusted neighborhood
- Author
-
Wei Chen, Yuyu Zhou, and Xuecao Li
- Subjects
010504 meteorology & atmospheric sciences ,Ecology ,Computer science ,Neighborhood ,Ecological Modeling ,Interval temporal logic ,0211 other engineering and technologies ,Temporal context ,Urban sprawl ,Logistic regression ,02 engineering and technology ,A-weighting ,01 natural sciences ,Cellular automaton ,Cellular automata (CA) model ,Long period ,lcsh:QH540-549.5 ,Econometrics ,Spatial representation ,lcsh:Ecology ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences - Abstract
Background: Cellular automata (CA)-based models have been extensively used in urban sprawl modeling. Most studies have focused on improving the spatial representation in the modeling, with limited effort devoted to the temporal context of urban sprawl. In this paper, we developed a Logistic-Trend-CA model by proposing a trend-adjusted neighborhood as a weighting factor that uses information from historical urban sprawl, and integrating this factor into the commonly used Logistic-CA model. We applied the developed model to the Beijing-Tianjin-Hebei region of China and analyzed the sensitivity of the model to the start year, the suitability surface, and the neighborhood size. Results: Our results indicate that the proposed Logistic-Trend-CA model outperforms the traditional Logistic-CA model significantly, yielding improvements of about 18% and 14% in modeling urban sprawl at medium (1 km) and fine (30 m) resolutions, respectively. The proposed Logistic-Trend-CA model is more suitable than the traditional Logistic-CA model for urban sprawl modeling over long temporal intervals. In addition, the new model is not sensitive to suitability surfaces calibrated from different periods and spaces, and its performance decreases as the neighborhood size increases. Conclusion: The proposed model shows potential for modeling future urban sprawl spanning a long period at regional and global scales.
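The trend-adjusted neighborhood is only named in the abstract, so the sketch below should be read as a loose illustration of the idea: a conventional neighborhood density term scaled by a factor derived from historical urban growth. The particular trend definition, window size, and clipping are assumptions, not the published formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def trend_adjusted_neighborhood(urban_t0, urban_t1, size=5):
    """Neighborhood term of a Logistic-Trend-CA style model (sketch).

    urban_t0, urban_t1 : binary urban maps at two historical dates
    size               : neighborhood window size (cells)

    The standard neighborhood effect (fraction of urban cells in the moving
    window at the later date) is scaled by a trend factor that emphasizes
    places where urban density has been increasing; this trend definition
    is an assumption for illustration only.
    """
    dens_t0 = uniform_filter(urban_t0.astype(float), size=size)
    dens_t1 = uniform_filter(urban_t1.astype(float), size=size)
    trend = np.clip(1.0 + (dens_t1 - dens_t0), 0.0, 2.0)
    return dens_t1 * trend

rng = np.random.default_rng(1)
t0 = (rng.random((50, 50)) < 0.10).astype(int)
t1 = np.maximum(t0, (rng.random((50, 50)) < 0.05).astype(int))  # growth only
omega = trend_adjusted_neighborhood(t0, t1, size=5)
print(float(omega.max()))
```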
- Published
- 2020
- Full Text
- View/download PDF
40. A novel algorithm in a linear phased array system for side lobe and grating lobe level reduction with large element spacing
- Author
-
Jafar Khalilpour, Javad Ranjbar, and Poorya Karami
- Subjects
Beamforming ,Physics ,Beam diameter ,business.industry ,Phased array ,020208 electrical & electronic engineering ,020206 networking & telecommunications ,02 engineering and technology ,A-weighting ,Grating ,Surfaces, Coatings and Films ,Weighting ,Adaptive filter ,Optics ,Hardware and Architecture ,Side lobe ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,business - Abstract
Phased array antennas are generally used for their inherent flexibility in performing beamforming and null-steering electronically. In phased arrays, the side lobe level (SLL) is the main problem: it wastes energy or saturates the receiver in the presence of strong spatial blockers. In this paper, a weighting method was first used to reduce the SLL. However, this method increased the beam width and reduced resolution, which is not suitable for tracking applications. In the next step, hoping to increase the resolution, the distance between the antennas was increased, but grating lobes then appeared in the final beam. The main idea of the article is to solve this problem. Two randomization methods, applied to the antenna positions and to the coefficient levels, were examined for a large element spacing on the order of one wavelength; they significantly reduce the side lobe level and the grating lobe level simultaneously while increasing the beam resolution. It is shown that, for a linear 11-element phased array system with up to a 180° scan angle and simultaneous beamforming and null-steering, a resolution of 10° was obtained for the separation between the desired beam and the null points. The proposed system reduces the grating lobes to below −13 dB. This technique enables the realization of arrays with a larger number of elements, in which the weighting functions and the mode excitation of the elements can be controlled by an adaptive signal-processing unit in scanning phased array systems.
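The trade-off the abstract describes, amplitude weighting lowering side lobes while widening the main beam, and grating lobes appearing once the element spacing approaches one wavelength, can be reproduced with a simple array-factor computation. The sketch below uses a Hamming taper as one example weighting; it does not implement the paper's randomization methods.

```python
import numpy as np

def array_factor(weights, d_lambda, theta_deg, theta0_deg=0.0):
    """Normalized array factor (dB) of a uniform linear array.

    weights    : per-element amplitude taper (the weighting method)
    d_lambda   : element spacing in wavelengths
    theta_deg  : observation angles (degrees from broadside)
    theta0_deg : steering angle
    """
    n = np.arange(len(weights))
    th = np.radians(theta_deg)
    th0 = np.radians(theta0_deg)
    psi = 2 * np.pi * d_lambda * (np.sin(th)[:, None] - np.sin(th0)) * n
    af = np.abs(np.exp(1j * psi) @ weights)
    return 20 * np.log10(af / af.max())

theta = np.linspace(-90, 90, 1801)
uniform = np.ones(11)
taper = np.hamming(11)            # one classical SLL-reducing weighting
af_u = array_factor(uniform, d_lambda=0.5, theta_deg=theta)
af_t = array_factor(taper, d_lambda=0.5, theta_deg=theta)
# With spacing near one wavelength, grating lobes appear regardless of the
# taper, which is the problem the randomization methods address.
af_wide = array_factor(taper, d_lambda=1.0, theta_deg=theta)

mask = np.abs(theta) > 20  # look away from the main beam
print(f"max level outside main beam: uniform {af_u[mask].max():.1f} dB, "
      f"Hamming {af_t[mask].max():.1f} dB, Hamming at d=1.0 lambda "
      f"{af_wide[mask].max():.1f} dB")
```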
- Published
- 2020
- Full Text
- View/download PDF
41. Economic Performance Optimization by Set Point and Weighting Parameter Tuning based on LQG Controller Design
- Author
-
Sayuri Okayama and Shiro Masuda
- Subjects
0209 industrial biotechnology ,Optimization problem ,Computer science ,02 engineering and technology ,Variance (accounting) ,A-weighting ,Linear-quadratic-Gaussian control ,Weighting ,020901 industrial engineering & automation ,020401 chemical engineering ,Control theory ,Process control ,Variance reduction ,0204 chemical engineering ,Electrical and Electronic Engineering ,Performance improvement - Abstract
In industrial process control, reducing the variance of the process input and output contributes significantly to economic performance improvement. Several studies have formulated economic performance optimization problems that take account of the trade-off relation between the process input and output variances. In these studies, the set points and a weighting parameter are optimized subject to constraints on the upper and lower limits of the process input and output signals. However, previous works made no use of the analytical trade-off relation, because it is too complex to be used in the constraint conditions. The present work introduces theoretical formulations that relate the weighting parameter to the input and output variances on the condition that Linear Quadratic Gaussian (LQG) controllers are implemented as the lower-layer controllers. The proposed approach is applied to a two-input, two-output separation process model and solves for the optimal weighting parameter based on LQG control. The numerical example also shows that the obtained optimal weighting parameter is effective for an MPC-based learning algorithm.
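As a simplified illustration of how the weighting parameter trades input variance against output variance under LQG-type control, the sketch below sweeps the input weight rho for a scalar plant with full state feedback; the plant numbers are arbitrary and the measured-state assumption side-steps the estimator part of a full LQG design.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Scalar discrete-time plant x[k+1] = a x[k] + b u[k] + w[k], y = x,
# controlled by the state feedback minimizing E[y^2] + rho * E[u^2].
# Sweeping rho traces the input/output variance trade-off curve used in
# the economic performance optimization. All numbers are illustrative.
a, b, qw = 0.9, 0.5, 1.0   # qw: process-noise variance

def variances(rho):
    A = np.array([[a]]); B = np.array([[b]])
    Q = np.array([[1.0]]); R = np.array([[rho]])
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # u = -K x
    Acl = A - B @ K
    Sx = solve_discrete_lyapunov(Acl, np.array([[qw]]))  # state covariance
    var_y = float(Sx[0, 0])
    var_u = float((K @ Sx @ K.T)[0, 0])
    return var_y, var_u

for rho in (0.01, 0.1, 1.0, 10.0):
    vy, vu = variances(rho)
    print(f"rho={rho:5}: var(y)={vy:.3f}, var(u)={vu:.3f}")
```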
- Published
- 2020
- Full Text
- View/download PDF
42. A Weighting Function-Based Method for Resistivity Inversion in Subsurface Investigations
- Author
-
Zhenhao Xu, Junyang Shao, Chengkun Wang, Yin Xin, Ma Zhao, Wei Zhou, Lichao Nie, and Bin Liu
- Subjects
Environmental Engineering ,Resolution (electron density) ,0211 other engineering and technologies ,Mineralogy ,02 engineering and technology ,A-weighting ,Function (mathematics) ,010502 geochemistry & geophysics ,Geotechnical Engineering and Engineering Geology ,01 natural sciences ,Geophysics ,Resistivity inversion ,Geology ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences - Abstract
There is a high demand for detection accuracy and resolution with respect to anomalous bodies due to the increased development of underground spaces. This study focused on the weighted inversion of observed data from individual-array electrical resistivity tomography (ERT) and developed an improved method of applying a data weighting function in the geoelectrical inversion procedure. In this method, a weighting factor acting on the observed-data term was introduced into the objective function. For individual arrays, the sensitivity decreases with increasing electrode interval. Therefore, Jacobian matrices were computed for the observed data of the individual arrays to determine the value of the weighting factor, which was calculated automatically during inversion. In this work, 2D combined inversion of ERT data from four-electrode alpha-type arrays is examined. The effectiveness of the weighted inversion method was demonstrated using various synthetic and real data examples. The results indicated that the inversion method based on the weighted observed-data function could improve the contribution of observed data with depth information to the objective function. It was shown that the combined weighted inversion method can be a feasible tool for improving positioning accuracy and resolution when imaging deep anomalous bodies in the subsurface.
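A minimal sketch of sensitivity-based data weighting is shown below: rows of the Jacobian with small norms (typically the large-spacing, deeper-sensing measurements) receive larger weights in the data-misfit term. The inverse-row-norm rule and the normalization are illustrative assumptions, not the paper's exact automatic weighting.

```python
import numpy as np

def sensitivity_based_weights(J):
    """Observed-data weighting factors from a Jacobian (sensitivity) matrix.

    J : (n_data, n_model) Jacobian of the forward response. Up-weighting
    low-sensitivity rows boosts the contribution of the deeper information
    in the weighted least-squares objective
        phi(m) = || W_d (d_obs - f(m)) ||^2 + regularization.
    """
    row_norm = np.linalg.norm(J, axis=1)
    w = 1.0 / np.maximum(row_norm, 1e-12)
    return w / w.max()          # scale so the largest weight is 1

# Hypothetical Jacobian for 6 measurements over 20 model cells, with
# sensitivity decaying as the electrode spacing grows.
rng = np.random.default_rng(42)
J = rng.normal(size=(6, 20)) * np.linspace(1.0, 0.05, 6)[:, None]
W_d = np.diag(sensitivity_based_weights(J))
print(np.diag(W_d).round(3))
```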
- Published
- 2020
- Full Text
- View/download PDF
43. Accurate end systole detection in dicrotic notch-less arterial pressure waveforms
- Author
-
Christopher G. Pretty, Geoff Shaw, Rachel Smith, J. Geoffrey Chase, Thomas Desaive, and Joel Balmer
- Subjects
Cardiac function curve ,Aorta ,medicine.medical_specialty ,Diastole ,Hemodynamics ,030208 emergency & critical care medicine ,Health Informatics ,A-weighting ,Critical Care and Intensive Care Medicine ,03 medical and health sciences ,0302 clinical medicine ,Anesthesiology and Pain Medicine ,Blood pressure ,030202 anesthesiology ,medicine.artery ,Internal medicine ,medicine ,Aortic pressure ,Cardiology ,Systole ,Mathematics - Abstract
Identification of end systole is often necessary when studying events specific to systole or diastole, for example, in models that estimate cardiac function and systolic time intervals such as left ventricular ejection duration. In proximal arterial pressure waveforms, such as those from the aorta, the dicrotic notch marks this transition from systole to diastole. However, distal arterial pressure measurements are more common in a clinical setting and typically contain no dicrotic notch. This study defines a new end systole detection algorithm for dicrotic notch-less arterial waveforms. The new algorithm utilises the beta distribution probability density function as a weighting function, which adapts based on the end systole locations of previous heartbeats. Its accuracy is compared with an existing end systole estimation method on dicrotic notch-less distal pressure waveforms. Because there are no dicrotic notches defining end systole, validating which method performs better is more difficult. Thus, a validation method is developed using dicrotic notch locations from simultaneously measured aortic pressure, forward projected by pulse transit time (PTT) to the more distal pressure signal. Systolic durations, estimated by each of the end systole estimates, are then compared to the validation systolic duration provided by the PTT-based end systole point. Data come from ten pigs, across two protocols testing the algorithms under different hemodynamic states. The resulting mean difference ± limits of agreement between measured and estimated systolic duration, of [Formula: see text] versus [Formula: see text] for the new and existing algorithms respectively, indicate the new algorithm's superiority.
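The following sketch illustrates how a beta probability density function can serve as an adaptive weighting curve over a beat, centred on the end-systole fraction estimated from previous beats. The parameterization (centring the mode at the previous fraction with a fixed concentration) is an assumption for illustration; the published algorithm's exact parameters may differ.

```python
import numpy as np
from scipy.stats import beta

def end_systole_weight(n_samples, prev_fraction, concentration=30.0):
    """Weighting curve over one beat for scoring end-systole candidates.

    n_samples     : number of samples in the current beat
    prev_fraction : running estimate of end systole as a fraction of the
                    beat, taken from previous beats (adaptation step)
    concentration : how sharply the weighting concentrates around the
                    previous location (illustrative value)
    The beta parameters below simply place the PDF's mode at prev_fraction.
    """
    a = prev_fraction * (concentration - 2.0) + 1.0
    b = concentration - a
    t = np.linspace(0.0, 1.0, n_samples)
    w = beta.pdf(t, a, b)
    return w / w.max()

# A candidate end-systole score would be a waveform feature (e.g. maximum
# negative pressure gradient) multiplied by this adaptive weight.
weights = end_systole_weight(n_samples=400, prev_fraction=0.35)
print(weights.argmax(), weights.max())
```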
- Published
- 2020
- Full Text
- View/download PDF
44. Accurate Indoor Sound Level Measurement on a Low-Power and Low-Cost Wireless Sensor Node
- Author
-
Vladimir Risojević, Robert Rozman, Ratko Pilipović, Rok Češnovar, and Patricio Bulić
- Subjects
environmental noise monitoring ,noise sensing ,A-weighting ,hardware platform ,wireless sensor network ,Chemical technology ,TP1-1185 - Abstract
Wireless sensor networks can provide a cheap and flexible infrastructure to support the measurement of noise pollution. However, processing the gathered data is challenging to implement on resource-constrained nodes, because each node has its own limited power supply, a low-performance and low-power micro-controller unit, other limited processing resources, and a limited amount of memory. We propose a sensor node for monitoring indoor ambient noise. The sensor node is based on a hardware platform with limited computational resources and utilizes several simplifications to approximate a more complex and costly signal processing stage. Furthermore, to reduce the communication between the sensor node and a sink node, as well as the power consumed by the IEEE 802.15.4 (ZigBee) transceiver, we perform digital A-weighting filtering and a non-calibrated calculation of the sound pressure level on the node. According to experimental results, the proposed sound level meter can accurately measure noise levels of up to 100 dB, with a mean difference of less than 2 dB compared to a Class 1 sound level meter. The proposed device can continuously monitor indoor noise for several days. Despite the limitations of the hardware platform used, the presented node is a promising low-cost and low-power solution for indoor ambient noise monitoring.
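A common way to realize the on-node processing described above is a digital A-weighting filter obtained by a bilinear transform of the analogue weighting defined in IEC 61672, followed by an RMS level computation. The sketch below follows that standard construction; the absence of a calibration constant mirrors the non-calibrated level mentioned in the abstract, and the test tone is illustrative.

```python
import numpy as np
from scipy.signal import bilinear, lfilter

def a_weighting_coeffs(fs):
    """Digital A-weighting filter (bilinear transform of the analogue
    IEC 61672 weighting); fs is the sampling rate in Hz."""
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    A1000 = 1.9997  # dB gain that normalizes the response to 0 dB at 1 kHz
    num = [(2 * np.pi * f4) ** 2 * 10 ** (A1000 / 20.0), 0.0, 0.0, 0.0, 0.0]
    den = np.polymul([1.0, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1.0, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1.0, 2 * np.pi * f3]),
                     [1.0, 2 * np.pi * f2])
    return bilinear(num, den, fs)

def uncalibrated_level_dba(samples, fs):
    """A-weighted level relative to full scale (dBFS); a fixed calibration
    offset would convert this to absolute dB SPL."""
    b, a = a_weighting_coeffs(fs)
    weighted = lfilter(b, a, samples)
    rms = np.sqrt(np.mean(weighted ** 2))
    return 20.0 * np.log10(rms + 1e-12)

fs = 16000
t = np.arange(fs) / fs
tone = 0.1 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone
print(round(uncalibrated_level_dba(tone, fs), 1))
```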
- Published
- 2018
- Full Text
- View/download PDF
45. Weighted aggregation systems and an expectation level-based weighting and scoring procedure
- Author
-
Tamás Jónás and József Dombi
- Subjects
Information Systems and Management ,General Computer Science ,Function (mathematics) ,A-weighting ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Weighting ,Transformation (function) ,Operator (computer programming) ,Modeling and Simulation ,01.02. Computing and information sciences ,Decision model ,Algorithm ,Extended real number line ,Mathematics ,Unit interval
This paper presents a novel approach to weighted aggregation and to the determination of weights in an aggregation procedure. In our study, we introduce the concept of a weighted aggregation system that consists of two components: (1) a weighting transformation and (2) an aggregation operator, both induced by a common generator function. We provide the necessary and sufficient condition for the form of a generator function-based weighted aggregation system. We show that the weighted quasi-arithmetic means on the non-negative extended real line are none other than the aggregation functions induced by weighted aggregation systems; that is, these means are compositions of an n-ary aggregation operator and n weighting transformations (n ∈ ℕ, n ≥ 1). Next, using weighted quasi-arithmetic means on the unit interval, we introduce a new, expectation level-based weight determination method and a scoring procedure. In this method, the decision-maker's expectation levels for the input variables are directly transformed into weights by making use of the generator function of a weighted quasi-arithmetic mean. We utilize this mean as a scoring function to evaluate the decision alternatives. Lastly, by means of illustrative numerical examples, we present a novel decision model in which the expectation levels can even be intervals, i.e., the weights are also intervals. Finally, we obtain an interval-valued score for each alternative.
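A weighted quasi-arithmetic mean is straightforward to compute once a generator function is chosen. The sketch below uses the logarithm as generator (giving a weighted geometric mean on the unit interval); the mapping from expectation levels to weights is a plain normalization here, standing in for the generator-based transformation the paper defines.

```python
import numpy as np

def weighted_quasi_arithmetic_mean(x, w, g, g_inv):
    """M(x; w) = g^{-1}( sum_i w_i * g(x_i) ), with weights summing to 1."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return g_inv(np.sum(w * g(x)))

# Example on the unit interval with generator g(t) = ln t, whose induced
# mean is the weighted geometric mean. Scores and expectation levels are
# hypothetical values for one decision alternative over three criteria.
scores = [0.9, 0.6, 0.75]
expectation_levels = [0.8, 0.5, 0.7]
weights = np.asarray(expectation_levels) / np.sum(expectation_levels)

score = weighted_quasi_arithmetic_mean(scores, weights, np.log, np.exp)
print(round(float(score), 4))
```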
- Published
- 2022
46. Weighting schemes and incomplete data: A generalized Bayesian framework for chance-corrected interrater agreement
- Author
-
Rutger van Oest and Jeffrey M. Girard
- Subjects
Observer Variation ,Bayesian probability ,Reproducibility of Results ,Bayes Theorem ,A-weighting ,Missing data ,Interpretation (model theory) ,Weighting ,Inter-rater reliability ,Statistics ,Humans ,Psychology (miscellaneous) ,Row ,Kappa ,Mathematics - Abstract
Van Oest (2019) developed a framework to assess interrater agreement for nominal categories and complete data. We generalize this framework to all four situations of nominal or ordinal categories and complete or incomplete data. The mathematical solution yields a chance-corrected agreement coefficient that accommodates any weighting scheme for penalizing rater disagreements and any number of raters and categories. By incorporating Bayesian estimates of the category proportions, the generalized coefficient also captures situations in which raters classify only subsets of items; that is, incomplete data. Furthermore, this coefficient encompasses existing chance-corrected agreement coefficients: the S-coefficient, Scott's pi, Fleiss' kappa, and Van Oest's uniform prior coefficient, all augmented with a weighting scheme and the option of incomplete data. We use simulation to compare these nested coefficients. The uniform prior coefficient tends to perform best, in particular, if one category has a much larger proportion than others. The gap with Scott's pi and Fleiss' kappa widens if the weighting scheme becomes more lenient to small disagreements and often if more item classifications are missing; missingness biases play a moderating role. The uniform prior coefficient often performs much better than the S-coefficient, but the S-coefficient sometimes performs best for small samples, missing data, and lenient weighting schemes. The generalized framework implies a new interpretation of chance-corrected weighted agreement coefficients: These coefficients estimate the probability that both raters in a pair assign an item to its correct category without guessing. Whereas Van Oest showed this interpretation for unweighted agreement, we generalize to weighted agreement. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
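For orientation, the sketch below computes a classical (non-Bayesian) chance-corrected weighted agreement coefficient for two raters with complete data, using pooled category proportions for the chance term; it shows where a linear or quadratic weighting scheme enters such a coefficient, but it is not the generalized Bayesian estimator proposed in the paper.

```python
import numpy as np

def weighted_agreement(ratings_a, ratings_b, n_categories, kind="linear"):
    """Chance-corrected weighted agreement for two raters, complete data.

    Disagreement penalties follow a linear or quadratic weighting scheme;
    chance agreement uses the pooled category proportions (Scott-style).
    """
    k = n_categories
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    if kind == "linear":
        w = 1.0 - np.abs(i - j) / (k - 1)
    else:  # quadratic
        w = 1.0 - ((i - j) / (k - 1)) ** 2

    a = np.asarray(ratings_a); b = np.asarray(ratings_b)
    po = np.mean(w[a, b])                                    # observed agreement
    p = np.bincount(np.concatenate([a, b]), minlength=k) / (2.0 * len(a))
    pe = p @ w @ p                                           # chance agreement
    return (po - pe) / (1.0 - pe)

ra = np.array([0, 1, 2, 2, 3, 1, 0, 3])   # illustrative ordinal ratings
rb = np.array([0, 1, 2, 3, 3, 2, 0, 3])
print(round(weighted_agreement(ra, rb, n_categories=4), 3))
```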
- Published
- 2021
- Full Text
- View/download PDF
47. Study and Assessment of Low Frequency Noise in Occupational Settings.
- Author
-
Shehap, Adel M., Shawky, Hany A., and El-Basheer, Tarek M.
- Subjects
- *NOISE, *CITIES & towns, *GAS turbines, *FREQUENCY spectra, *SPECTRUM analysis - Abstract
Low frequency noise is one of the most harmful factors occurring in human working and living environments. Low frequency noise components from 20 to 250 Hz are often the cause of employee complaints. Noise from power stations is a real problem for large cities, including Cairo: noise from station equipment can be a serious problem both for the station and for the surrounding area. The development of power stations in Cairo has led to the appearance of a wide range of gas turbines, which are strong sources of noise. Two measurement techniques, using the C-weighted scale alongside the A-weighted scale, are explored; C-weighting is far more sensitive for detecting low frequency sound. Spectrum analysis in the low frequency range is done in order to identify significant tonal components. Field studies were supported by a questionnaire, using the mean annoyance rating, to determine whether sociological or other factors might influence the results. The study included 153 male employees (mean age = 36.86, SD = 8.49) at the three electrical power stations. The (C-A) level difference is an appropriate metric for indicating a potential low frequency noise problem, and the A-weighting characteristic seems able to predict quite accurately the annoyance experienced from LFN at workplaces. The aim of the present study is to find a simple and reliable method for assessing low frequency noise in occupational environments to prevent its effects on workers' performance. The proposed method is then compared with European methods. [ABSTRACT FROM AUTHOR]
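The (C - A) level difference discussed above can be computed directly from octave-band levels using the standard A- and C-weighting corrections. The sketch below does so for a hypothetical low-frequency-dominated spectrum; the spectrum values and the indicative threshold mentioned in the comment are illustrative.

```python
import numpy as np

# Standard octave-band corrections (dB) for A- and C-weighting,
# 31.5 Hz ... 8 kHz, rounded to 0.1 dB as usually tabulated.
BANDS = [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000]
A_CORR = [-39.4, -26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]
C_CORR = [-3.0, -0.8, -0.2, 0.0, 0.0, 0.0, -0.2, -0.8, -3.0]

def weighted_level(band_levels_db, corrections):
    """Overall level from octave-band SPLs after applying a weighting."""
    weighted = np.asarray(band_levels_db) + np.asarray(corrections)
    return 10.0 * np.log10(np.sum(10.0 ** (weighted / 10.0)))

# Hypothetical gas-turbine-hall spectrum dominated by low frequencies.
spectrum = [88, 85, 80, 74, 70, 66, 62, 58, 52]
LA = weighted_level(spectrum, A_CORR)
LC = weighted_level(spectrum, C_CORR)
print(f"LA = {LA:.1f} dB(A), LC = {LC:.1f} dB(C), C - A = {LC - LA:.1f} dB")
# A (C - A) difference of roughly 15-20 dB or more is often taken as an
# indicator of a potential low-frequency noise problem.
```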
- Published
- 2016
- Full Text
- View/download PDF
48. Development and validation of a new adaptive weighting for auditory risk assessment of complex noise.
- Author
-
Sun, Pengfei, Qin, Jun, and Qiu, Wei
- Subjects
- *RISK assessment, *DEAFNESS, *NOISE measurement, *PARAMETER estimation, *KURTOSIS - Abstract
Noise-induced hearing loss (NIHL) remains a serious occupation-related health problem worldwide. The A-weighted equivalent sound pressure level (SPL), L_Aeq, has been widely used in noise measurement standards to assess the auditory risk of occupational noise. In addition, C-weighting is used in the standards for detecting the peak SPL of noise. However, both A-weighting and C-weighting have limitations in the evaluation of high-level complex noise, which is often encountered in military and industrial settings. In this study, we proposed a new adaptive weighting (F-weighting) for more accurate evaluation of complex noise. F-weighting is based on blending A-weighting and C-weighting through the weighting coefficients α_A,T and α_C,T. To determine α_A,T and α_C,T, two parameters, kurtosis (K_T) and an oscillation coefficient (O_T), were introduced. Complex noise exposures in animal studies and noise signals measured in a mining facility were used to validate the performance of F-weighting. The results show that F-weighting performs better than both A-weighting and C-weighting in the assessment of high-level complex noise. In addition, the F-weighting-based L_Feq shows a higher correlation with the hearing loss in the animal experimental data than the A-weighted L_Aeq, the C-weighted L_Ceq, and the unweighted L_eq. The proposed F-weighting could be a potential alternative weighting for the assessment of high-level complex noise in military and industrial applications. [ABSTRACT FROM AUTHOR]
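The abstract defines F-weighting as a blend of A- and C-weighting controlled by kurtosis and an oscillation coefficient, but does not give the blending rule. The sketch below therefore shows only the blending structure, with a placeholder kurtosis-driven coefficient; the mapping used here is an assumption and is not the published definition of α_A,T and α_C,T.

```python
import numpy as np
from scipy.stats import kurtosis

def blended_level(la_eq, lc_eq, noise, kurtosis_ref=3.0, kurtosis_max=50.0):
    """Blend A- and C-weighted equivalent levels into a single metric.

    The blending coefficient alpha is driven here by the (Pearson) kurtosis
    of the noise waveform only, as a placeholder; the published F-weighting
    also uses an oscillation coefficient and its own mapping to the
    coefficients alpha_A,T and alpha_C,T.
    """
    k = kurtosis(noise, fisher=False)
    alpha_c = np.clip((k - kurtosis_ref) / (kurtosis_max - kurtosis_ref), 0.0, 1.0)
    alpha_a = 1.0 - alpha_c
    return alpha_a * la_eq + alpha_c * lc_eq

# Impulsive synthetic waveform (high kurtosis) with illustrative LAeq/LCeq.
rng = np.random.default_rng(7)
impulsive = np.concatenate([rng.normal(0, 0.01, 9000), rng.normal(0, 1.0, 1000)])
print(round(blended_level(85.0, 97.0, impulsive), 1))
```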
- Published
- 2016
- Full Text
- View/download PDF
49. Development of Overhead Transmission Line Assessment Index
- Author
-
Rahman Azis Prasojo, Rizally Priatmadja, Rofiul Huda, and Suwarno
- Subjects
Electric power system ,Power transmission ,Electric power transmission ,Transmission (telecommunications) ,Transmission line ,Computer science ,A-weighting ,Line (text file) ,Weighting ,Reliability engineering - Abstract
The transmission line is an essential part of the electric power system, serving as the medium that transmits electrical energy from the power plants to the distribution system. The existing transmission line assessment index is not optimal because it does not consider the measurement values obtained from the inspection methods; therefore, a suitable way to reflect the real condition of the line is needed. In several previous studies, AHP was developed based on expert opinion to establish parameter prioritization, and using multiple experts can help such a system evolve. To obtain the consensus matrix, a row geometric mean approach was applied. This study employed a consensus methodology for multiple experts in calculating the weighting factors of the power transmission line assessment index using AHP. As a result, the main component factor is the most critical factor for transmission line condition, with a weighting value of 0.429, followed by the environment assessment and supporting component factors. Transmission assessment index calculations were performed on 200 samples of transmission line towers. The results show that 71%, or 152, of the transmission line towers are in good condition, while 29%, or 48, are in the caution category. The correlation between parameters was also observed. It can be concluded that the higher the impact of the environmental assessment, the lower the assessment index value.
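The row-geometric-mean step mentioned above can be sketched as follows: experts' pairwise-comparison matrices are aggregated element-wise by geometric mean into a consensus matrix, from which priority weights are obtained by the geometric mean of each row. The two expert matrices are invented for illustration and do not reproduce the paper's data.

```python
import numpy as np

def consensus_weights(pairwise_matrices):
    """Aggregate experts' AHP pairwise-comparison matrices and derive weights.

    pairwise_matrices : list of (n, n) positive reciprocal matrices.
    Returns the consensus matrix and the normalized weight vector obtained
    by geometric-mean (logarithmic least squares) prioritization.
    """
    stack = np.array(pairwise_matrices, dtype=float)
    consensus = np.exp(np.mean(np.log(stack), axis=0))   # element-wise geometric mean
    row_gm = np.exp(np.mean(np.log(consensus), axis=1))  # row geometric means
    return consensus, row_gm / row_gm.sum()

# Two hypothetical experts comparing three factors (main component,
# environment assessment, supporting component).
expert1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
expert2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])
C, w = consensus_weights([expert1, expert2])
print(w.round(3))   # a dominant weight on the first factor, as in the study
```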
- Published
- 2021
- Full Text
- View/download PDF
50. An improved data-free surrogate model for solving partial differential equations using deep neural networks
- Author
-
Jie Liu, Rongliang Chen, Qian Wan, Rui Xu, and Xinhai Chen
- Subjects
Multidisciplinary ,Partial differential equation ,Artificial neural network ,Computer science ,Science ,Computational science ,Structure (category theory) ,Observable ,A-weighting ,Applied mathematics ,Article ,symbols.namesake ,Surrogate model ,Mesh generation ,Helmholtz free energy ,symbols ,Medicine - Abstract
Partial differential equations (PDEs) are ubiquitous in natural science and engineering problems. Traditional discrete methods for solving PDEs are usually time-consuming and labor-intensive due to the need for tedious mesh generation and numerical iterations. Recently, deep neural networks have shown new promise in cost-effective surrogate modeling because of their universal function approximation abilities. In this paper, we borrow the idea from physics-informed neural networks (PINNs) and propose an improved data-free surrogate model, DFS-Net. Specifically, we devise an attention-based neural structure containing a weighting mechanism to alleviate the problem of unstable or inaccurate predictions by PINNs. The proposed DFS-Net takes expanded spatial and temporal coordinates as the input and directly outputs the observables (quantities of interest). It approximates the PDE solution by minimizing the weighted residuals of the governing equations and data-fit terms, where no simulation or measured data are needed. The experimental results demonstrate that DFS-Net offers a good trade-off between accuracy and efficiency. It outperforms the widely used surrogate models in terms of prediction performance on different numerical benchmarks, including the Helmholtz, Klein–Gordon, and Navier–Stokes equations.
- Published
- 2021
- Full Text
- View/download PDF