1,314 results
Search Results
2. Engaging A Consultancy Firm To Undertake Baseline Energy Audit And Benchmarking Study For Non-PAT Industries In Pulp And Paper Sector
- Subjects
Paper industry -- Accounting and auditing, Energy conservation -- India, Benchmarks, Pulp industry -- Accounting and auditing, Energy efficiency, Alternative energy sources, Consulting services -- Accounting and auditing, Energy management systems, Business, international - Abstract
TENDER NOTICE FOR Engaging a consultancy firm to undertake Baseline Energy Audit and Benchmarking Study for Non-PAT industries in Pulp and Paper sector. The Federal Republic of Germany and [...]
- Published
- 2021
3. Image Matching Across Wide Baselines: From Paper to Practice
- Author
-
Yuhe Jin, Kwang Moo Yi, Pascal Fua, Eduard Trulls, Jiri Matas, Dmytro Mishkin, and Anastasiia Mishchuk
- Subjects
Computer science, Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence, benchmark, dataset, local features, 3D reconstruction, structure from motion, stereo, benchmarking, pipeline (software), pattern recognition, metric, embedding, heuristics, performance, Software - Abstract
We introduce a comprehensive benchmark for local features and robust estimation algorithms, focusing on the downstream task -- the accuracy of the reconstructed camera pose -- as our primary metric. Our pipeline's modular structure allows easy integration, configuration, and combination of different methods and heuristics. This is demonstrated by embedding dozens of popular algorithms and evaluating them, from seminal works to the cutting edge of machine learning research. We show that with proper settings, classical solutions may still outperform the perceived state of the art. Besides establishing the actual state of the art, the conducted experiments reveal unexpected properties of Structure from Motion (SfM) pipelines that can help improve their performance, for both algorithmic and learned methods. Data and code are online at https://github.com/vcg-uvic/image-matching-benchmark, providing an easy-to-use and flexible framework for the benchmarking of local features and robust estimation methods, both alongside and against top-performing methods. This work provides a basis for the Image Matching Challenge https://vision.uvic.ca/image-matching-challenge. (Comment: added KeyNet-SOSNet, AffNet-HardNet, TFeat, and MKD from kornia.)
- Published
- 2020
4. Supplementary Materials for the paper 'A Review of R Neural Network Packages (with NNbenchmark): Accuracy and Ease of Use'
- Author
-
Mahdi, Salsabila, Verma, Akshaj, Dutang, Christophe, Kiener, Patrice, and Nash, John C.
- Subjects
Pattern recognition, benchmark, computers and education, R software, Neural network - Abstract
Supplementary materials for a forthcoming paper entitled 'A Review of R Neural Network Packages (with NNbenchmark): Accuracy and Ease of Use'
- Published
- 2020
5. Parton distributions and lattice QCD calculations: A community white paper
- Author
-
Giuseppe Bozzi, K. F. Liu, Christopher Monahan, C.-P. Yuan, Alberto Accardi, Robert S. Thorne, Hartmut Wittig, Constantia Alexandrou, Jian-Wei Qiu, Emanuele R. Nocera, Gerrit Schierholz, Alessandro Bacchetta, Luigi Del Debbio, Sara Collins, Kostas Orginos, Fred Olness, Rajan Gupta, Tomomi Ishikawa, Amanda Cooper-Sarkar, Jeremy Green, James Zanotti, Jiunn-Wei Chen, Simonetta Liuti, Juan Rojo, Martha Constantinou, Michael Engelhardt, Huey-Wen Lin, Pavel Nadolsky, Aleksander Kusina, Werner Vogelsang, Lucian Harland-Lang, Ingo Schienbein, (Astro)-Particles Physics, and Laboratoire de Physique Subatomique et de Cosmologie (LPSC), Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP)-Institut National de Physique Nucléaire et de Physique des Particules du CNRS (IN2P3)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)
- Subjects
Nuclear and High Energy Physics, particle physics, quantum chromodynamics, Lattice QCD, lattice field theory, High Energy Physics - Lattice (hep-lat), High Energy Physics - Phenomenology (hep-ph), QCD factorization, hard scattering, quark, gluon and parton distribution functions, hadron spin, polarization, observable, benchmark, global QCD fits, unpolarized/polarized parton distribution functions (PDFs) - Abstract
Progress in Particle and Nuclear Physics 100, 107–160 (2018). doi:10.1016/j.ppnp.2018.01.007. In the framework of quantum chromodynamics (QCD), parton distribution functions (PDFs) quantify how the momentum and spin of a hadron are divided among its quark and gluon constituents. Two main approaches exist to determine PDFs. The first approach, based on QCD factorization theorems, realizes a QCD analysis of a suitable set of hard-scattering measurements, often using a variety of hadronic observables. The second approach, based on first-principle operator definitions of PDFs, uses lattice QCD to compute directly some PDF-related quantities, such as their moments. Motivated by recent progress in both approaches, in this document we present an overview of lattice-QCD and global-analysis techniques used to determine unpolarized and polarized proton PDFs and their moments. We provide benchmark numbers to validate present and future lattice-QCD calculations, and we illustrate how they could be used to reduce the PDF uncertainties in current unpolarized and polarized global analyses. This document represents a first step towards establishing a common language between the two communities, to foster dialogue and to further improve our knowledge of PDFs. Published by Elsevier, Oxford [u.a.]
- Published
- 2018
6. Researchers from Scuola Normale Superiore Report Findings in Data Mining and Knowledge Discovery (Benchmarking and Survey of Explanation Methods for Black Box Models)
- Subjects
Benchmarks -- Methods -- Surveys -- Reports, Machine learning -- Reports -- Methods -- Surveys, Paper converting machinery -- Reports -- Methods -- Surveys, Data mining -- Reports -- Methods -- Surveys, Data warehousing/data mining, Benchmark, Computers - Abstract
2023 JUL 4 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Data detailed on Information Technology - Data Mining and Knowledge Discovery have been presented. [...]
- Published
- 2023
7. FSODv2: A Deep Calibrated Few-Shot Object Detection Network
- Author
-
Fan, Qi, Zhuo, Wei, Tang, Chi-Keung, and Tai, Yu-Wing
- Published
- 2024
8. THE STRATEGIES OF LAGOS STATE PUBLIC PROCUREMENT AGENCY FOR IMPLEMENTATION OF PUBLIC PROCUREMENT LAW.
- Author
-
Adelore, ADEWOYIN Adewunmi
- Subjects
GOVERNMENT purchasing, PUBLIC law, PUBLIC contracts, AUDIT committees, SCHOLARLY periodicals, SAMPLING (Process), GOVERNMENT policy, SCHOOL holding power - Abstract
This paper examined the strategies put in place by the Lagos State Public Procurement Agency (LSPPA) for the implementation of public procurement policy. The paper utilised primary and secondary sources of data. Primary data were collected solely through the administration of questionnaires to respondents. The study population of 1,020 comprised staff members from the ministries and from the agency that monitors the implementation of procurement policy in the state, the Lagos State Public Procurement Agency. A proportionate random sampling technique was used to select a sample of 154 respondents, representing 15% of the study population. Secondary data were obtained from books, academic journals, official documents of the LSPPA, and the internet. The data collected were analysed using percentages, frequencies, and the Relative Importance Index (RII). The results showed that some of the strategies put in place by the agency, such as the establishment of a threshold (52.6%, ranked 7th), the constitution of a contract performance audit committee (63%, ranked 9.5th), supervision of deliveries (69.5%, ranked 9.5th), and the creation of a procurement officer's desk (57.8%, ranked 8th), were not substantial enough for monitoring the implementation of public procurement policy in Lagos State. 
Moreover, the paper identified strategies put in place by the agency that were adequately substantial, such as the composition of a technical review committee (79.9%, ranked 1st), appraisal of procurement plans for procuring entities (73.3%, ranked 2nd), the setting of fair pricing standards and benchmarks (61%, ranked 6th), strategic oversight of contracts (65.6%, ranked 5th), maintenance of a contractors' database for proper identification (65.6%, ranked 4th), and publication of the details of major contracts in the state (69.5%, ranked 3rd); these significantly improved accountability, proficiency, and efficacy among the selected ministries. The study therefore concludes that the strategies put in place by the LSPPA for implementing the state's public procurement law are substantial and apt, ensuring that the procurement law of the state is followed; this was further evidenced by six claims affirming it as against four that refuted it. [ABSTRACT FROM AUTHOR]
- Published
- 2023
9. Popularity and performance of bioinformatics software: the case of gene set analysis.
- Author
-
Xie, Chengshu, Jauhari, Shaurya, and Mora, Antonio
- Subjects
BIOINFORMATICS software, BIBLIOGRAPHIC databases, GENES, POPULARITY, ONLINE databases, DISCUSSION in education - Abstract
Background: Gene Set Analysis (GSA) is arguably the method of choice for the functional interpretation of omics results. The following paper explores the popularity and the performance of all the GSA methodologies and software published during the 20 years since its inception. "Popularity" is estimated according to each paper's citation counts, while "performance" is based on a comprehensive evaluation of the validation strategies used by papers in the field, as well as the consolidated results from the existing benchmark studies. Results: Regarding popularity, data is collected into an online open database ("GSARefDB") which allows browsing bibliographic and method-descriptive information from 503 GSA paper references; regarding performance, we introduce a repository of jupyter workflows and shiny apps for automated benchmarking of GSA methods ("GSA-BenchmarKING"). After comparing popularity versus performance, results show discrepancies between the most popular and the best performing GSA methods. Conclusions: The above-mentioned results call our attention towards the nature of the tool selection procedures followed by researchers and raise doubts regarding the quality of the functional interpretation of biological datasets in current biomedical studies. Suggestions for the future of the functional interpretation field are made, including strategies for education and discussion of GSA tools, better validation and benchmarking practices, reproducibility, and functional re-analysis of previously reported data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
10. Towards Trusted Smart Contracts: A Comprehensive Test Suite For Vulnerability Detection
- Author
-
Arusoaie, Andrei and Susan, Ștefan-Claudiu
- Published
- 2024
11. Metaheuristic Optimization Methods in Energy Community Scheduling: A Benchmark Study.
- Author
-
Gomes, Eduardo, Pereira, Lucas, Esteves, Augusto, and Morais, Hugo
- Subjects
METAHEURISTIC algorithms, PARTICLE swarm optimization, DIFFERENTIAL evolution, BIOLOGICAL evolution, RENEWABLE energy sources - Abstract
The prospect of the energy transition is exciting and sure to benefit multiple aspects of daily life. However, various challenges, such as planning, business models, and energy access are still being tackled. Energy Communities have been gaining traction in the energy transition, as they promote increased integration of Renewable Energy Sources (RESs) and more active participation from the consumers. However, optimization becomes crucial to support decision making and the quality of service for the effective functioning of Energy Communities. Optimization in the context of Energy Communities has been explored in the literature, with increasing attention to metaheuristic approaches. This paper contributes to the ongoing body of work by presenting the results of a benchmark between three classical metaheuristic methods—Differential Evolution (DE), the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO)—and three more recent approaches—the Mountain Gazelle Optimizer (MGO), the Dandelion Optimizer (DO), and the Hybrid Adaptive Differential Evolution with Decay Function (HyDE-DF). Our results show that newer methods, especially the Dandelion Optimizer (DO) and the Hybrid Adaptive Differential Evolution with Decay Function (HyDE-DF), tend to be more competitive in terms of minimizing the objective function. In particular, the Hybrid Adaptive Differential Evolution with Decay Function (HyDE-DF) demonstrated the capacity to obtain extremely competitive results, being on average 3% better than the second-best method while boasting between around 2× and 10× the speed of other methods. These insights become highly valuable in time-sensitive areas, where obtaining results in a shorter amount of time is crucial for maintaining system operational capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
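The classical baselines in the benchmark above are simple enough to state directly. As a point of reference, here is a minimal sketch of DE/rand/1/bin, the canonical Differential Evolution variant; the objective function, bounds, and hyperparameters below are illustrative choices, not values taken from the paper.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimize `objective` with classical DE/rand/1/bin.
    `bounds` is a list of (low, high) pairs, one per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: difference of two random vectors added to a third (all != i).
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover, with one coordinate guaranteed to come from the mutant.
            j_rand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            # Clip to bounds, then greedy one-to-one selection.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            s = objective(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Illustrative use: minimize the 3-dimensional sphere function.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

Adaptive variants such as HyDE-DF build on this same mutate/crossover/select loop, varying F and CR over the run.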
12. Predictions from Generative Artificial Intelligence Models: Towards a New Benchmark in Forecasting Practice.
- Author
-
Hassani, Hossein and Silva, Emmanuel Sirimal
- Subjects
GENERATIVE artificial intelligence ,PROBABILISTIC generative models ,PREDICTION theory ,STATISTICAL smoothing ,FORECASTING - Abstract
This paper aims to determine whether there is a case for promoting a new benchmark for forecasting practice via the innovative application of generative artificial intelligence (Gen-AI) for predicting the future. Today, forecasts can be generated via Gen-AI models without the need for an in-depth understanding of forecasting theory, practice, or coding. Therefore, using three datasets, we present a comparative analysis of forecasts from Gen-AI models against forecasts from seven univariate and automated models from the forecast package in R, covering both parametric and non-parametric forecasting techniques. In some cases, we find statistically significant evidence to conclude that forecasts from Gen-AI models can outperform forecasts from popular benchmarks like seasonal ARIMA, seasonal naïve, exponential smoothing, and Theta forecasts (to name a few). Our findings also indicate that the accuracy of forecasts from Gen-AI models can vary not only based on the underlying data structure but also on the quality of prompt engineering (thus highlighting the continued importance of forecasting education), with the forecast accuracy appearing to improve at longer horizons. Therefore, we find some evidence towards promoting forecasts from Gen-AI models as benchmarks in future forecasting practice. However, at present, users are cautioned against reliability issues and Gen-AI being a black box in some cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
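Among the benchmarks this paper compares against is the seasonal naïve forecast, which simply repeats the last observed season. A minimal sketch follows; the series and period are illustrative, and MAPE is just one of several accuracy scores used in forecasting practice, not necessarily the one used in the paper.

```python
def seasonal_naive(history, horizon, period):
    """Forecast each future step with the value one full season earlier."""
    return [history[-period + (h % period)] for h in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast-accuracy score."""
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative quarterly series (period 4): the forecast repeats the last season.
history = [10, 20, 30, 40, 12, 22, 32, 42]
forecast = seasonal_naive(history, horizon=4, period=4)
# forecast == [12, 22, 32, 42]
```

Any candidate model, Gen-AI included, can then be scored against the same held-out values and compared to this baseline.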
13. WeatherBench 2: A Benchmark for the Next Generation of Data‐Driven Global Weather Models.
- Author
-
Rasp, Stephan, Hoyer, Stephan, Merose, Alexander, Langmore, Ian, Battaglia, Peter, Russell, Tyler, Sanchez‐Gonzalez, Alvaro, Yang, Vivian, Carver, Rob, Agrawal, Shreya, Chantry, Matthew, Ben Bouallegue, Zied, Dueben, Peter, Bromberg, Carla, Sisk, Jared, Barrington, Luke, Bell, Aaron, and Sha, Fei
- Subjects
NUMERICAL weather forecasting, WEATHER forecasting, WEATHER, ARTIFICIAL intelligence, WEATHERING - Abstract
WeatherBench 2 is an update to the global, medium‐range (1–14 days) weather forecasting benchmark proposed by Rasp et al. (2020, https://doi.org/10.1029/2020ms002203), designed with the aim to accelerate progress in data‐driven weather modeling. WeatherBench 2 consists of an open‐source evaluation framework, publicly available training, ground truth and baseline data as well as a continuously updated website with the latest metrics and state‐of‐the‐art models: https://sites.research.google/weatherbench. This paper describes the design principles of the evaluation framework and presents results for current state‐of‐the‐art physical and data‐driven weather models. The metrics are based on established practices for evaluating weather forecasts at leading operational weather centers. We define a set of headline scores to provide an overview of model performance. In addition, we also discuss caveats in the current evaluation setup and challenges for the future of data‐driven weather forecasting. Plain Language Summary: Traditionally, weather forecasts are made by models that attempt to replicate the physical processes of the atmosphere. This has been very successful over the last few decades as better computers, better observations and model upgrades have led to steadily improving weather forecasts. However, with rapid advances in artificial intelligence (AI), the question can be asked whether one can simply learn a weather model from past observations or reanalyses. In the last couple of years, we have seen tremendous progress with state‐of‐the‐art AI models rivaling the best "traditional" weather models in skill. WeatherBench 2 is a benchmark data set designed to evaluate and compare the quality of AI and traditional models. By setting a standard for evaluation, alongside providing open‐source data and code, this project aims to accelerate this research direction and lead to better weather prediction. 
Key Points: WeatherBench 2 is a framework for evaluating and comparing data‐driven and traditional numerical weather forecasting models. It provides an evaluation framework, publicly available data sets, and a website to assess state‐of‐the‐art weather models. The evaluation protocol has been designed following best practices established in the operational weather forecasting community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
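A common headline-style score for gridded forecast evaluation of the kind described above is a latitude-weighted RMSE, which downweights polar grid rows whose cells cover less physical area. This sketch assumes an equal-angle lat/lon grid and cosine-of-latitude weighting, a standard convention in the field rather than a detail quoted from the paper.

```python
import math

def lat_weighted_rmse(forecast, truth, lats_deg):
    """RMSE over a (lat, lon) grid of values, weighting each latitude row by
    cos(latitude) so equal-angle cells contribute in proportion to their area."""
    weights = [math.cos(math.radians(lat)) for lat in lats_deg]
    mean_w = sum(weights) / len(weights)
    total, count = 0.0, 0
    for row_f, row_t, w in zip(forecast, truth, weights):
        for f_val, t_val in zip(row_f, row_t):
            total += (w / mean_w) * (f_val - t_val) ** 2
            count += 1
    return math.sqrt(total / count)

# Illustrative 3x2 grid at latitudes 60, 0, -60 with a uniform error of 1.0.
f = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
t = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
# A spatially uniform error is unaffected by the weighting: RMSE == 1.0.
```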
14. Jump-diffusion risk-sensitive benchmarked asset management with traditional and alternative data.
- Author
-
Davis, Mark and Lleo, Sébastien
- Subjects
ASSET management ,INVESTORS ,ASSET allocation ,RETURN on assets ,PRICES ,INVESTMENT policy - Abstract
This paper addresses errors in mean return estimates in continuous-time asset allocation models. A standard approach postulates that stochastic factors explain expected asset returns. The problem is then to estimate these factors from observed asset prices via filtering. Recent advances have also combined asset prices with expert opinions to improve the estimates. However, these methods have limitations: stock prices favor momentum strategies, and expert opinions require careful debiasing. To resolve these issues, we propose a jump-diffusion risk-sensitive benchmarked asset management model in which investors estimate the factors from both traditional and alternative data. We show that this model admits a unique C^{1,2} solution, and we derive the optimal investment policy in quasi-closed form. We find that investors construct their portfolios from a passive core and an active satellite. The passive core adds considerations for jump risk to a simple benchmark replication. The active satellite blends security selection and factor tilts with event-driven strategies unique to jump-diffusion problems. Thus, our model explains the most popular investment strategies. Furthermore, the improved expert forecast model and the introduction of alternative data provide factor tilters with new tools to sharpen their asset allocation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. A review of visual SLAM for robotics: evolution, properties, and future applications.
- Author
-
Al-Tawil, Basheer, Hempel, Thorsten, Abdelrahman, Ahmed, Al-Hamadi, Ayoub, Chakraborty, Chinmay, Bingi, Kishore, Elamvazuthi, Irraivan, and Cruz, Edmanuel
- Subjects
SLAM (Robotics) ,DEEP learning ,OBJECT tracking (Computer vision) ,HUMAN-robot interaction ,VISUAL odometry ,CONVOLUTIONAL neural networks ,ARTIFICIAL intelligence - Abstract
Visual simultaneous localization and mapping (V-SLAM) plays a crucial role in the field of robotic systems, especially for interactive and collaborative mobile robots. The growing reliance on robotics has increased the complexity of task execution in real-world applications. Consequently, several types of V-SLAM methods have been developed to facilitate and streamline robot functions. This work aims to showcase the latest V-SLAM methodologies, offering clear selection criteria for researchers and developers to choose the right approach for their robotic applications. It chronologically presents the evolution of SLAM methods, highlighting key principles and providing comparative analyses between them. The paper focuses on the integration of the robotic ecosystem with a robot operating system (ROS) as middleware, explores essential V-SLAM benchmark datasets, and presents demonstrative figures for each method's workflow. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Dynamic model validation and advanced polymer control for rotating belt filtration as primary treatment of domestic wastewaters
- Author
-
Christopher T. DeGroot, Chitta Ranjan Behera, Krist V. Gernaey, Riccardo Boiocchi, Gürkan Sin, Anthony Sherratt, and Domenico Santoro
- Subjects
General Chemical Engineering, rotating belt filters, wastewater, benchmark, Industrial and Manufacturing Engineering, effluent, filtration, total suspended solids, Applied Mathematics, model validation, General Chemistry, particulates, pulp and paper industry, clean water, volumetric flow rate, activated sludge, environmental science, primary treatment, aeration, plant-wide assessment - Abstract
Rotating belt filters (RBFs) are a primary treatment technology currently used in municipal wastewater treatment plants as a high-rate alternative to primary clarifiers. Such systems are reported to efficiently remove particulates in the form of total suspended solids (TSS), achieving the highest removal efficiencies (>60%) when polymer is added as a pre-treatment step. In this paper, a new dynamic model describing RBF performance is presented. The model includes the dynamic effects of influent flow rate and TSS concentration and can also describe the effects of polymer dosing on RBF performance. The validated model was used to perform a plant-wide impact assessment using the Benchmark Simulation Model No. 2. For a TSS removal efficiency identical to that of primary clarifiers (50%), rotating belt filters were found to only slightly decrease the aeration energy demand in the activated sludge unit (−0.5%), while increasing methane production and slightly increasing effluent TN concentrations (+1.5% and +1.9%, respectively). Furthermore, considerable savings in polymer costs could be attained using an advanced control strategy for polymer dosing, successfully tested in this work.
- Published
- 2020
17. Correspondence on NanoVar's performance outlined by Jiang T. et al. in "Long-read sequencing settings for efficient structural variation detection based on comprehensive evaluation".
- Author
-
Tham, Cheng Yong and Benoukraf, Touati
- Subjects
SCIENTIFIC community ,NUCLEOTIDE sequencing ,GENOTYPES - Abstract
A recent paper by Jiang et al. in BMC Bioinformatics presented guidelines on long-read sequencing settings for structural variation (SV) calling, and benchmarked the performance of various SV calling tools, including NanoVar. In their simulation-based benchmarking, NanoVar was shown to perform poorly compared to other tools, mostly due to low SV recall rates. To investigate the causes for NanoVar's poor performance, we regenerated the simulation datasets (3× to 20×) as specified by Jiang et al. and performed benchmarking for NanoVar and Sniffles. Our results did not reflect the findings described by Jiang et al. In our analysis, NanoVar displayed more than three times the F1 scores and recall rates as reported in Jiang et al. across all sequencing coverages, indicating a previous underestimation of its performance. We also observed that NanoVar outperformed Sniffles in calling SVs with genotype concordance by more than 0.13 in F1 scores, which is contrary to the trend reported by Jiang et al. Besides, we identified multiple detrimental errors encountered during the analysis which were not addressed by Jiang et al. We hope that this commentary clarifies NanoVar's validity as a long-read SV caller and provides assurance to its users and the scientific community. [ABSTRACT FROM AUTHOR]
- Published
- 2023
18. A novel two-phase trigonometric algorithm for solving global optimization problems
- Author
-
Baskar, A., Xavior, M. Anthony, Jeyapandiarajan, P., Batako, Andre, and Burduk, Anna
- Published
- 2024
19. Comparative analysis of circular RNA enrichment methods
- Author
-
Huajuan Shi, Ying Zhou, Erteng Jia, Zhiyu Liu, Min Pan, Yunfei Bai, Xiangwei Zhao, and Qinyu Ge
- Subjects
CircRNA sequencing, Sequence Analysis, RNA, Gene Expression Profiling, RNA Stability, Reproducibility of Results, Genes, rRNA, RNA, Circular, Cell Biology, Chemical Fractionation, sensitivity, Sensitivity and Specificity, benchmark, Humans, precision, enrichment methods, Transcriptome, Molecular Biology, Gene Library - Abstract
CircRNA sequencing results vary with the enrichment method used, and the performance of these methods requires systematic comparison. This study investigated the effects of different circRNA enrichment methods on sequencing results, including the abundance and number of circRNA species detected, as well as sensitivity and precision. The experiment compared four common circRNA enrichment methods: ribosomal RNA depletion (rRNA–); poly(A)+ RNA depletion followed by RNase R treatment (polyA+RNase R); rRNA–+polyA+RNase R; and polyA+RNase R+rRNA–. The results showed that the polyA+RNase R+rRNA– method yielded the largest number of circRNAs and the highest sensitivity and abundance, while the polyA+RNase R method achieved the highest precision. Linear RNAs were thoroughly removed by all enrichment methods except rRNA depletion alone. Overall, our results help researchers quickly select a circRNA enrichment method suitable for their own study, and they provide a benchmark framework for future improvements to circRNA enrichment methods.
- Published
- 2021
20. Diagnostic Accuracy of Web-Based COVID-19 Symptom Checkers: Comparison Study
- Author
-
Bernhard Knapp, Stefanie Gruarin, Nicolas Munsch, Alistair Martin, Rafael Weingartner-Ortner, Jama Nateqi, and Isselmou Abdarahmane
- Subjects
Medicine, medical informatics, digital health, Health Informatics, diagnostic accuracy, benchmark, symptom checkers, internal medicine, accuracy, chatbot, Coronavirus disease 2019 (COVID-19), Matthews correlation coefficient, symptom, confidence interval, sample size determination, predictive value of tests, comparison study, F1 score - Abstract
Background A large number of web-based COVID-19 symptom checkers and chatbots have been developed; however, anecdotal evidence suggests that their conclusions are highly variable. To our knowledge, no study has evaluated the accuracy of COVID-19 symptom checkers in a statistically rigorous manner. Objective The aim of this study is to evaluate and compare the diagnostic accuracies of web-based COVID-19 symptom checkers. Methods We identified 10 web-based COVID-19 symptom checkers, all of which were included in the study. We evaluated the COVID-19 symptom checkers by assessing 50 COVID-19 case reports alongside 410 non–COVID-19 control cases. A bootstrapping method was used to counter the unbalanced sample sizes and obtain confidence intervals (CIs). Results are reported as sensitivity, specificity, F1 score, and Matthews correlation coefficient (MCC). Results The classification task between COVID-19–positive and COVID-19–negative for “high risk” cases among the 460 test cases yielded (sorted by F1 score): Symptoma (F1=0.92, MCC=0.85), Infermedica (F1=0.80, MCC=0.61), US Centers for Disease Control and Prevention (CDC) (F1=0.71, MCC=0.30), Babylon (F1=0.70, MCC=0.29), Cleveland Clinic (F1=0.40, MCC=0.07), Providence (F1=0.40, MCC=0.05), Apple (F1=0.29, MCC=-0.10), Docyet (F1=0.27, MCC=0.29), Ada (F1=0.24, MCC=0.27) and Your.MD (F1=0.24, MCC=0.27). For “high risk” and “medium risk” combined the performance was: Symptoma (F1=0.91, MCC=0.83) Infermedica (F1=0.80, MCC=0.61), Cleveland Clinic (F1=0.76, MCC=0.47), Providence (F1=0.75, MCC=0.45), Your.MD (F1=0.72, MCC=0.33), CDC (F1=0.71, MCC=0.30), Babylon (F1=0.70, MCC=0.29), Apple (F1=0.70, MCC=0.25), Ada (F1=0.42, MCC=0.03), and Docyet (F1=0.27, MCC=0.29). Conclusions We found that the number of correctly assessed COVID-19 and control cases varies considerably between symptom checkers, with different symptom checkers showing different strengths with respect to sensitivity and specificity. 
A good balance between sensitivity and specificity was only achieved by two symptom checkers.
- Published
- 2020
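The metrics named in the abstract above (F1 score, MCC, and bootstrapped confidence intervals for unbalanced samples) can be sketched as follows. This is a generic illustration, not the study's code; the case counts used in any call are illustrative.

```python
import random

def f1_and_mcc(tp, fp, fn, tn):
    # F1 score: harmonic mean of precision and recall.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # Matthews correlation coefficient: stays informative when classes are unbalanced.
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, mcc

def bootstrap_f1_ci(labels, preds, n_boot=1000, alpha=0.05, seed=0):
    # Resample cases with replacement to get a (lo, hi) CI for the F1 score,
    # countering the unbalanced positive/negative sample sizes.
    rng = random.Random(seed)
    n = len(labels)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        tp = sum(1 for i in idx if labels[i] and preds[i])
        fp = sum(1 for i in idx if not labels[i] and preds[i])
        fn = sum(1 for i in idx if labels[i] and not preds[i])
        tn = sum(1 for i in idx if not labels[i] and not preds[i])
        scores.append(f1_and_mcc(tp, fp, fn, tn)[0])
    scores.sort()
    return scores[int(alpha / 2 * n_boot)], scores[int((1 - alpha / 2) * n_boot) - 1]
```

For example, a checker with 40 true positives, 10 false positives, 10 false negatives, and 400 true negatives has F1 = 0.8.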
21. FUSeg: The Foot Ulcer Segmentation Challenge.
- Author
-
Wang, Chuanbo, Mahbod, Amirreza, Ellinger, Isabella, Galdran, Adrian, Gopalakrishnan, Sandeep, Niezgoda, Jeffrey, and Yu, Zeyun
- Subjects
DEEP learning ,FOOT ulcers ,DIABETIC foot ,WOUND care ,CHRONIC wounds & injuries ,COMPUTER-assisted image analysis (Medicine) - Abstract
Wound care professionals provide proper diagnosis and treatment with heavy reliance on images and image documentation. Segmentation of wound boundaries in images is a key component of the care and diagnosis protocol since it is important to estimate the area of the wound and provide quantitative measurement for the treatment. Unfortunately, this process is very time-consuming and requires a high level of expertise, hence the need for automatic wound measurement methods. Recently, automatic wound segmentation methods based on deep learning have shown promising performance; yet, they heavily rely on large training datasets. A few wound image datasets were published including the Diabetic Foot Ulcer Challenge dataset, the Medetec wound dataset, and WoundDB. Existing public wound image datasets suffer from small size and a lack of annotation. There is a need to build a fully annotated dataset to benchmark wound segmentation methods. To address these issues, we propose the Foot Ulcer Segmentation Challenge (FUSeg), organized in conjunction with the 2021 International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). It contains 1210 pixel-wise annotated foot ulcer images collected over 2 years from 889 patients. The submitted algorithms are reviewed in this paper and the dataset can be accessed through the Foot Ulcer Segmentation Challenge website. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Remanufacturing a Synchronous Reluctance Machine with Aluminum Winding: An Open Benchmark Problem for FEM Analysis.
- Author
-
Katona, Mihály, Bányai, Dávid Gábor, Németh, Zoltán, Kuczmann, Miklós, and Orosz, Tamás
- Subjects
BENCHMARK problems (Computer science) ,REMANUFACTURING ,GREEN products ,ORIGINAL equipment manufacturers ,ELECTRIC machines - Abstract
The European Union's increasing focus on sustainable and eco-friendly product design has resulted in significant pressure on original equipment manufacturers to adopt more environmentally conscious practices. As a result, the remanufacturing of end-of-life electric machines is expected to become a promising industrial segment. Identifying the missing parameters of these types of machines will play an essential role in creating feasible and reliable redesigns and remanufacturing processes. A few case studies related to this problem have been published in the literature; however, some novel, openly accessible benchmark problems can facilitate the research and function as a basis for comparing and validating novel numerical methods. This paper presents the identification process of an experimental synchronous machine. It outlines methodologies for identifying material properties, winding schemes, and other critical parameters for the finite element analysis and modelling of electric machines with incomplete information. The machine in question is intended for remanufacturing, with the plan to replace its faulty winding with an aluminium-based alternative. It also serves as an open benchmark problem for researchers, designers, and practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. A Particle Swarm and Smell Agent-Based Hybrid Algorithm for Enhanced Optimization.
- Author
-
Sulaiman, Abdullahi T., Bello-Salau, Habeeb, Onumanyi, Adeiza J., Mu'azu, Muhammed B., Adedokun, Emmanuel A., Salawudeen, Ahmed T., and Adekale, Abdulfatai D.
- Subjects
OPTIMIZATION algorithms ,NATURAL language processing ,SUPERVISED learning ,VEHICULAR ad hoc networks ,PARTICLE swarm optimization - Abstract
The particle swarm optimization (PSO) algorithm is widely used for optimization purposes across various domains, such as in precision agriculture, vehicular ad hoc networks, path planning, and for the assessment of mathematical test functions towards benchmarking different optimization algorithms. However, because of the inherent limitations in the velocity update mechanism of the algorithm, PSO often converges to suboptimal solutions. Thus, this paper aims to enhance the convergence rate and accuracy of the PSO algorithm by introducing a modified variant, which is based on a hybrid of the PSO and the smell agent optimization (SAO), termed the PSO-SAO algorithm. Our specific objective involves the incorporation of the trailing mode of the SAO algorithm into the PSO framework, with the goal of effectively regulating the velocity updates of the original PSO, thus improving its overall performance. By using the trailing mode, agents are continuously introduced to track molecules with higher concentrations, thus guiding the PSO's particles towards optimal fitness locations. We evaluated the performance of the PSO-SAO, PSO, and SAO algorithms using a set of 37 benchmark functions categorized into unimodal and non-separable (UN), multimodal and non-separable (MS), and unimodal and separable (US) classes. The PSO-SAO achieved better convergence towards global solutions, performing better than the original PSO in 76% of the assessed functions. Specifically, it achieved a faster convergence rate and achieved a maximum fitness value of −2.02180678324 when tested on the Adjiman test function at a hopping frequency of 9. Consequently, these results underscore the potential of PSO-SAO for solving engineering problems effectively, such as in vehicle routing, network design, and energy system optimization. 
These findings serve as an initial stride towards the formulation of a robust hyperparameter tuning strategy applicable to supervised machine learning and deep learning models, particularly in the domains of natural language processing and path-loss modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
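The velocity-update mechanism the abstract above identifies as PSO's weak point is the standard rule v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x). A minimal sketch of that baseline rule on a sphere test function is given below; the paper's SAO trailing-mode regulation is not reproduced here, since its exact form is defined in the paper itself.

```python
import random

def pso_sphere(dim=5, n_particles=20, iters=200, seed=1):
    # Baseline PSO minimizing the sphere function f(x) = sum(x_i^2).
    # The PSO-SAO hybrid described above regulates this velocity update
    # with SAO's trailing mode; this sketch shows only the original rule.
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients
    f = lambda x: sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                # personal best positions
    g = min(P, key=f)[:]                 # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return f(g)
```

With these (conventional, convergent) parameter choices the swarm contracts onto the global minimum at the origin, which is the behavior the hybrid aims to accelerate.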
24. Benchmarking the Complementary-View Multi-human Association and Tracking.
- Author
-
Han, Ruize, Feng, Wei, Wang, Feifan, Qian, Zekun, Yan, Haomin, and Wang, Song
- Subjects
WEARABLE cameras ,VIDEO recording ,CAMERAS ,TRACKING radar - Abstract
Using multiple moving cameras with different and time-varying views can significantly expand the capability of multiple human tracking in larger areas and with various perspectives. In particular, the use of moving cameras with complementary top and horizontal views can facilitate multi-human detection and tracking from both global and local perspectives. As this new and challenging problem has drawn increasing attention in recent years, one main issue is the lack of a comprehensive dataset for credible performance evaluation. In this paper, we present such a new dataset consisting of videos synchronously recorded by drone and wearable cameras, with high-quality annotations of the covered subjects and their cross-frame and cross-view associations. We also propose a pertinent baseline algorithm for multi-view multiple human tracking and evaluate it on this new dataset against the annotated ground truths. Experimental results verify the usefulness of the new dataset and the effectiveness of the proposed baseline algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Run Your 3D Object Detector on NVIDIA Jetson Platforms:A Benchmark Analysis †.
- Author
-
Choe, Chungjae, Choe, Minjae, and Jung, Sungwook
- Subjects
DEEP learning ,OBJECT recognition (Computer vision) ,DETECTORS ,CENTRAL processing units ,REAL-time control ,POINT cloud ,NAVIGATION - Abstract
This paper presents a benchmark analysis of NVIDIA Jetson platforms when operating deep learning-based 3D object detection frameworks. Three-dimensional (3D) object detection could be highly beneficial for the autonomous navigation of robotic platforms, such as autonomous vehicles, robots, and drones. Since the function provides one-shot inference that extracts 3D positions with depth information and the heading direction of neighboring objects, robots can generate a reliable path to navigate without collision. To enable the smooth functioning of 3D object detection, several approaches have been developed to build detectors using deep learning for fast and accurate inference. In this paper, we investigate 3D object detectors and analyze their performance on the NVIDIA Jetson series that contain an onboard graphical processing unit (GPU) for deep learning computation. Since robotic platforms often require real-time control to avoid dynamic obstacles, onboard processing with a built-in computer is an emerging trend. The Jetson series satisfies such requirements with a compact board size and suitable computational performance for autonomous navigation. However, a proper benchmark that analyzes the Jetson for a computationally expensive task, such as point cloud processing, has not yet been extensively studied. In order to examine the Jetson series for such expensive tasks, we tested the performance of all commercially available boards (i.e., Nano, TX2, NX, and AGX) with state-of-the-art 3D object detectors. We also evaluated the effect of the TensorRT library to optimize a deep learning model for faster inference and lower resource utilization on the Jetson platforms. We present benchmark results in terms of three metrics, including detection accuracy, frame per second (FPS), and resource usage with power consumption. From the experiments, we observe that all Jetson boards, on average, consume over 80% of GPU resources. 
Moreover, TensorRT could remarkably increase inference speed (i.e., four times faster) and reduce central processing unit (CPU) and memory consumption by half. By analyzing such metrics in detail, we establish research foundations on edge device-based 3D object detection for the efficient operation of various robotic applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
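The benchmark above reports detection accuracy, frames per second (FPS), and resource usage. The FPS and tail-latency side of such a measurement can be sketched generically as below; `infer` is a stand-in for whatever detector call is being benchmarked, and the warm-up pass mirrors the practice of discarding the first runs, which include allocation and model-loading costs on embedded GPUs.

```python
import time
import statistics

def benchmark_fps(infer, n_warmup=10, n_runs=100):
    """Measure mean-throughput FPS and an approximate 99th-percentile latency
    for a zero-argument inference callable. Generic sketch, not the paper's code."""
    for _ in range(n_warmup):      # discard warm-up runs
        infer()
    latencies = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    p99 = latencies[int(0.99 * len(latencies)) - 1]   # approximate p99
    fps = 1.0 / statistics.mean(latencies)
    return fps, p99
```

In a real setup `infer` would wrap the (optionally TensorRT-optimized) detector's forward pass on a prepared point-cloud batch.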
26. Global optimization of mixed-integer nonlinear programs with SCIP 8
- Author
-
Bestuzheva, Ksenia, Chmiela, Antonia, Müller, Benjamin, Serrano, Felipe, Vigerske, Stefan, and Wegscheider, Fabian
- Published
- 2023
- Full Text
- View/download PDF
27. The Coulomb Hole of the Ne atom
- Author
-
Eloy Ramos-Cordoba, Eduard Matito, Miquel Solà, Xabier Lopez, Jesus M. Ugalde, and Mauricio Rodríguez-Mayorga
- Subjects
Extrapolation ,FOS: Physical sciences ,Neon ,Coulomb hole ,Electronic structure ,electron correlation ,010402 general chemistry ,01 natural sciences ,benchmark ,Core electron ,Physics - Chemical Physics ,Atom ,Coulomb ,Wave function ,core electrons ,Basis set ,Chemical Physics (physics.chem-ph) ,Physics ,Full Paper ,Electronic correlation ,010405 organic chemistry ,General Chemistry ,Full Papers ,electronic structure ,0104 chemical sciences ,Atomic physics - Abstract
We analyze the Coulomb hole of Ne from highly accurate CISD wave functions obtained from optimized even-tempered basis sets. Using a two-fold extrapolation procedure we obtain highly accurate results that recover 97% of the correlation energy. We confirm the existence of a shoulder in the short-range region of the Coulomb hole of the Ne atom, which is due to an internal reorganization of the K-shell caused by electron correlation of the core electrons. The feature is very sensitive to the quality of the basis set in the core region and it is not exclusive to Ne, being also present in most second-row atoms, thus confirming that it is due to K-shell correlation effects., 7 pages, 6 figures, 1 table
- Published
- 2017
28. A Consistent One-Dimensional Multigroup Diffusion Model for Molten Salt Reactor Neutronics Calculations.
- Author
-
Elhareef, Mohamed, Wu, Zeyun, and Fratoni, Massimiliano
- Subjects
MOLTEN salt reactors ,MONTE Carlo method ,FAST reactors ,NEUTRON diffusion ,TRANSIENT analysis - Abstract
Molten Salt Reactors (MSRs) have recently seen resurgent research and development interest in the advanced reactor community. Several computational tools are being developed to capture the strong neutronics/thermal-hydraulics coupling effect in this special reactor configuration. This paper presents a consistent one-dimensional (1D) multigroup neutron diffusion model for MSR analysis, with the primary aim of fast and accurate calculations for long transients, as well as sensitivity and uncertainty analysis of the reactor. A fictitious radial leakage cross section is introduced in the model to properly account for the radial leakage effects of the reactor. The leakage cross section and other consistent neutronics parameters are generated with the Monte Carlo code Serpent using high-fidelity three-dimensional (3D) models. The accuracy of the 1D consistent model is verified against the reference solution from the Monte Carlo model on the Molten Salt Reactor Experiment (MSRE) configuration. The 1D consistent model successfully reproduced the integrated flux from the 3D model and the reactor multiplication factor keff, with errors in the range of 95 to 397 pcm (per cent mille), depending on the discretized energy group structure. The developed model is also extended to estimate the reactivity loss due to fuel circulation in MSRE. The estimate of reactivity loss in dynamics analysis is in great agreement with the experimental data. This model functions as the first step in the development of a 1D fully coupled neutronics/thermal-hydraulics model for short- and long-term MSRE transient analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Benchmarking Standard and Micromechanical Models for Creep and Shrinkage of Concrete Relevant for Nuclear Power Plants.
- Author
-
Šmilauer, Vít, Dohnalová, Lenka, Jirásek, Milan, Sanahuja, Julien, Seetharam, Suresh, and Babaei, Saeid
- Subjects
EXPANSION & contraction of concrete ,CREEP (Materials) ,NUCLEAR power plants ,YOUNG'S modulus ,STRUCTURAL engineering ,DATABASES - Abstract
The creep and shrinkage of concrete play important roles for many nuclear power plant (NPP) and engineering structures. This paper benchmarks the standard and micromechanical models using a revamped and appended Northwestern University database of laboratory creep and shrinkage data with 4663 data sets. The benchmarking takes into account relevant concretes and conditions for NPPs using 781 plausible data sets and 1417 problematic data sets, which cover together 47% of the experimental data sets in the database. The B3, B4, and EC2 models were compared using the coefficient of variation of error (CoV) adjusted for the same significance for short-term and long-term measurements. The B4 model shows the lowest variations for autogenous shrinkage and basic and total creep, while the EC2 model performs slightly better for drying and total shrinkage. In addition, confidence levels at 5, 10, 90, and 95% are quantified in every decade. Two micromechanical models, Vi(CA) 2 T and SCK CEN, use continuum micromechanics for the mean field homogenization and thermodynamics of the water–pore structure interaction. Validations are carried out for the 28-day Young's modulus of concrete, basic creep compliance, and drying shrinkage of paste and concrete. The Vi(CA) 2 T model is the second best model for the 28-day Young's modulus and the basic creep problematic data sets. The SCK CEN micromechanical model provides good prediction for drying shrinkage. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Robust grasping across diverse sensor qualities: The GraspNet-1Billion dataset.
- Author
-
Fang, Hao-Shu, Gou, Minghao, Wang, Chenxi, and Lu, Cewu
- Subjects
DETECTORS ,ROBOTICS ,POINT cloud ,TACTILE sensors ,GENERALIZATION ,CAMERAS - Abstract
Robust object grasping in cluttered scenes is vital to all robotic prehensile manipulation. In this paper, we present the GraspNet-1Billion benchmark, which contains rich real-world captured cluttered scenarios and abundant annotations. This benchmark aims at solving two critical problems for parallel-finger grasping in cluttered scenes: insufficient real-world training data and the lack of an evaluation benchmark. We first contribute a large-scale grasp pose detection dataset. Two different depth cameras based on structured-light and time-of-flight technologies are adopted. Our dataset contains 97,280 RGB-D images with over one billion grasp poses. In total, 190 cluttered scenes are collected, among which 100 are for training and 90 for testing. Meanwhile, we build an evaluation system that is general and user-friendly. It directly reports a predicted grasp pose's quality by analytic computation, which makes it possible to evaluate any kind of grasp representation without exhaustively labeling the ground truth. We further divide the test set into three difficulty levels to better evaluate algorithms' generalization ability. Our dataset, access API, and evaluation code are publicly available at www.graspnet.net. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Multi-Strategy Improved Sand Cat Swarm Optimization: Global Optimization and Feature Selection.
- Author
-
Yao, Liguo, Yang, Jun, Yuan, Panliang, Li, Guanghui, Lu, Yao, and Zhang, Taihua
- Subjects
GLOBAL optimization ,SWARM intelligence ,SAND ,FEATURE selection ,CATS ,LEARNING strategies - Abstract
The sand cat is a creature suited to living in the desert. Sand cat swarm optimization (SCSO) is a biomimetic swarm intelligence algorithm inspired by the lifestyle of the sand cat. Although SCSO has achieved good optimization results, it still has drawbacks, such as being prone to falling into local optima, low search efficiency, and limited optimization accuracy due to limitations in some innate biological conditions. To address these shortcomings, this paper proposes three improvement strategies: a novel opposition-based learning strategy, a novel exploration mechanism, and a biological elimination update mechanism. Based on the original SCSO, a multi-strategy improved sand cat swarm optimization (MSCSO) is proposed. To verify the effectiveness of the proposed algorithm, MSCSO is applied to two types of problems: global optimization and feature selection. The global optimization experiments include twenty non-fixed-dimensional functions (Dim = 30, 100, and 500) and ten fixed-dimensional functions, while the feature selection experiments comprise 24 datasets. Analyzing and comparing the mathematical and statistical results from multiple perspectives against several state-of-the-art (SOTA) algorithms shows that the proposed MSCSO algorithm has good optimization ability and can adapt to a wide range of optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
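Of the three strategies named in the abstract above, opposition-based learning is the most standardized: for a candidate x in [lb, ub], the opposite point lb + ub − x is also evaluated and the better of the two is kept. A generic sketch of opposition-based initialization follows (this is the common textbook form, not necessarily the paper's exact variant):

```python
import random

def opposition_init(f, lb, ub, n, seed=0):
    """Opposition-based population initialization: for each random candidate x,
    also evaluate its opposite lb + ub - x and keep the better of the two.
    Generic sketch; f is the objective to minimize."""
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        x = [rng.uniform(l, u) for l, u in zip(lb, ub)]
        x_opp = [l + u - xi for l, u, xi in zip(lb, ub, x)]
        pop.append(min(x, x_opp, key=f))   # keep whichever scores lower
    return pop
```

The same mirror-and-compare step can also be applied during iterations, which is one common way such strategies reduce the risk of stagnating in local optima.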
32. Automatic evaluation of complex alignments: An instance-based approach.
- Author
-
Thiéblin, Elodie, Haemmerlé, Ollivier, and Trojahn, Cássia
- Subjects
TASK analysis ,ONTOLOGIES (Information retrieval) - Abstract
Ontology matching is the task of generating a set of correspondences (i.e., an alignment) between the entities of different ontologies. While most efforts on alignment evaluation have been dedicated to the evaluation of simple alignments (i.e., those linking one single entity of a source ontology to one single entity of a target ontology), the emergence of matchers providing complex alignments (i.e., those composed of correspondences involving logical constructors or transformation functions) requires new strategies for addressing the problem of automatically evaluating complex alignments. This paper proposes (i) a benchmark for complex alignment evaluation composed of an automatic evaluation system that relies on queries and instances, and (ii) a dataset about conference organisation. This dataset is composed of populated ontologies and a set of competency questions for alignment as SPARQL queries. State-of-the-art alignments are evaluated and a discussion on the difficulties of the evaluation task is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. Edge AIBench 2.0: A scalable autonomous vehicle benchmark for IoT--Edge--Cloud systems.
- Author
-
Tianshu Hao, Wanling Gao, Chuanxin Lan, Fei Tang, Zihan Jiang, and Jianfeng Zhan
- Subjects
AUTONOMOUS vehicles ,INTERNET of things ,BENCHMARKING (Management) ,WIRELESS hotspots ,CYBER physical systems - Abstract
Many emerging IoT--Edge--Cloud computing systems are not yet implemented, are too confidential to share their code, or are tricky to replicate in their execution environments, and hence benchmarking them is challenging. This paper uses autonomous vehicles as a typical scenario to build the first benchmark for IoT--Edge--Cloud systems. We propose a set of distilling rules for replicating autonomous vehicle scenarios to extract critical tasks with intertwined interactions. The essential system-level and component-level characteristics are captured while the system complexity is reduced significantly, so that users can quickly evaluate and pinpoint the system and component bottlenecks. Also, we implement a scalable architecture through which users can assess the systems with different sizes of workloads. We conduct several experiments to measure the performance. After testing two thousand autonomous vehicle task requests, we identify the bottleneck modules in autonomous vehicle scenarios and analyze hotspot functions. The experiment results show that the lane-keeping task is the slowest execution module, with a tail latency of 77.49 ms at the 99th percentile. We hope this scenario benchmark will be helpful for autonomous vehicle and IoT--Edge--Cloud research. The open-source code is available from the official website https://www.benchcouncil.org/scenariobench/edgeaibench.html. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Benchmark applications used in mobile cloud computing research: a systematic mapping study.
- Author
-
Silva, Francisco, Zaicaner, Germano, Quesado, Eder, Dornelas, Matheus, Silva, Bruno, and Maciel, Paulo
- Subjects
MOBILE computing ,CLOUD computing ,COMPUTER algorithms ,COMPUTER architecture ,DIGITAL mapping - Abstract
Mobile cloud computing (MCC) integrates mobile computing and cloud computing, aiming to extend the capabilities of mobile devices through offloading techniques. In MCC, many controlled experiments have been performed using mobile applications as benchmarks. Usually, these applications are used to validate proposed algorithms, architectures, or frameworks. Choosing a specific benchmark to evaluate MCC proposals is difficult because there is no standard list of applications. This paper presents a systematic mapping study of benchmarks used in MCC research. Over five months of work, we read 763 papers from the MCC field. We catalogued the applications and characterized them along three facets: category (e.g., games, imaging tools); evaluated resource (e.g., time, energy); and platform (e.g., Android, iPhone). The mapping study reveals research gaps and research trends. By providing a list of downloadable, standardized benchmarks, this work can support better benchmark choices and more reliable research studies, since the same application can be reused for different scientific purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
35. NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits.
- Author
-
Rosin, Paul L., Lai, Yu-Kun, Mould, David, Yi, Ran, Berger, Itamar, Doyle, Lars, Lee, Seungyong, Li, Chuan, Liu, Yong-Jin, Semmo, Amir, Shamir, Ariel, Son, Minjung, and Winnemöller, Holger
- Subjects
COMPUTER vision ,LEARNING communities - Abstract
Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level, benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general purpose, and also both traditional NPR approaches and NST) using the new benchmark dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. Dominance Tracking Index for Measuring Pension Fund Performance with Respect to the Benchmark.
- Author
-
Kopa, Milos, Sutiene, Kristina, Kabasinskas, Audrius, Lakstutiene, Ausrine, and Malakauskas, Aidas
- Abstract
This paper focuses on the performance of Lithuanian life-cycle second-pillar pension funds. Every such fund first specifies its benchmark and then attempts to follow the benchmark in some way. This is a form of regulation, meaning that every such fund is regulated and controlled by the central bank authorities. The goal of this paper is twofold: (i) to analyse the returns of the pension funds with respect to their benchmarks and (ii) to determine whether less strict regulation leads to better outperformance of the fund with respect to the benchmark. To achieve this, we introduce a new performance measure called the dominance-tracking index, which combines the ideas of almost stochastic dominance relations and tracking errors. While the tracking error and its modifications measure the strength of the regulation, almost stochastic dominance provides information about preferences between the funds and their benchmarks. Therefore, the new index was constructed to take both approaches into account. The empirical section of the study then presents the results separately for the considered pension managers and participants' age groups, as is usual in life-cycle pension fund analysis. Finally, by considering various periods, we study the effects of the COVID-19 crisis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
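One of the two building blocks the abstract above names, the tracking error, has a standard definition: the standard deviation of the active returns (fund minus benchmark). A minimal sketch of that component is below; the paper's dominance-tracking index itself combines this with almost stochastic dominance and is not reproduced here.

```python
import statistics

def tracking_error(fund_returns, benchmark_returns):
    """Classic tracking error: standard deviation of active returns
    (fund minus benchmark, per period). One building block of the
    dominance-tracking index described above."""
    active = [f - b for f, b in zip(fund_returns, benchmark_returns)]
    return statistics.stdev(active)
```

A fund that replicates its benchmark exactly has a tracking error of zero; looser regulation permits larger active deviations and hence a larger tracking error.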
37. Towards Low Light Enhancement With RAW Images.
- Author
-
Huang, Haofeng, Yang, Wenhan, Hu, Yueyu, Liu, Jiaying, and Duan, Ling-Yu
- Subjects
IMAGE intensifiers ,IMAGE processing ,METADATA ,DEEP learning - Abstract
In this paper, we make the first benchmark effort to elaborate on the superiority of using RAW images in low-light enhancement and develop a novel alternative route to utilize RAW images in a more flexible and practical way. Motivated by a full consideration of the typical image processing pipeline, we develop a new evaluation framework, the Factorized Enhancement Model (FEM), which decomposes the properties of RAW images into measurable factors and provides a tool for empirically exploring how the properties of RAW images affect enhancement performance. The empirical benchmark results show that the linearity of the data and the exposure time recorded in metadata play the most critical roles, bringing distinct performance gains on various measures over approaches that take sRGB images as input. With the insights obtained from the benchmark results in mind, a RAW-guiding Exposure Enhancement Network (REENet) is developed, which trades off the advantages and inaccessibility of RAW images in real applications by using RAW images only in the training phase. REENet projects sRGB images into linear RAW domains to apply constraints with corresponding RAW images, reducing the difficulty of model training. In the testing phase, REENet does not rely on RAW images. Experimental results demonstrate not only the superiority of REENet over state-of-the-art sRGB-based methods but also the effectiveness of the RAW guidance and all components. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Evaluation of Former Yugoslav Republic of Macedonia's Wi-Fi Kiosk Program
- Author
-
Gelvanovska, Natalija, Viatchaninova, Ievgeniia, and Saliu, Artan
- Subjects
RESEARCHER ,RURAL CONNECTIVITY ,INFORMATION REQUESTS ,END USERS ,BANDWIDTH ,GENERAL PUBLIC ,DESCRIPTION ,INTERNET TRAFFIC ,TECHNICAL ISSUES ,TELECOMMUNICATION ,OPINION SURVEY ,ROUTER ,TECHNICAL ASSISTANCE ,WEBSITES ,ANTENNA ,INFORMATION AND COMMUNICATION TECHNOLOGIES ,PENETRATION RATE ,NEXT GENERATION ,GOVERNMENT SUBSIDIES ,SITE ,COMPETITIVENESS ,END-USERS ,LICENSES ,PERFORMANCE INDICATORS ,PROJECT MANAGEMENT ,BROADBAND ACCESS ,ENABLING ENVIRONMENT ,ADMINISTRATIONS ,TELEPHONES ,PROCUREMENT ,PC ,BUSINESS DEVELOPMENT ,IMPACT ASSESSMENT ,E-MAIL ,USER GROUP ,HARDWARE ,WAN ,PERSONAL COMPUTERS ,DOMAINS ,SERVICE DELIVERY ,BASIC ,PDF ,CLASSIFICATION ,DIGITAL DIVIDE ,MOBILE SERVICE ,NUMBER OF USERS ,MANAGEMENT INFORMATION ,MOBILE PHONES ,TECHNOLOGY ACCESS ,ELECTRONIC COMMUNICATIONS NETWORKS ,INTERNET ACCESS ,DIGITAL ,INSTITUTION ,IMPACT ASSESSMENTS ,INFORMATION SYSTEM ,LITERACY ,SUPERVISION ,ACCESS POINTS ,ONLINE AUCTION ,COMMUNICATION TECHNOLOGIES ,ENCRYPTION ,E-GOVERNMENT ,PHONE NUMBER ,INTERNET CONNECTIVITY ,BUSINESS CLIMATE ,ACCESS POINT ,BACKBONE ,MARKETING ,WEB SERVICES ,ECONOMIC DEVELOPMENT ,TIME FRAME ,TELEPHONE ,ISPS ,BROADBAND NETWORKS ,INNOVATION ,DEVELOPMENT OF BROADBAND ,ECONOMIC ACTIVITY ,UNIVERSAL SERVICE OBLIGATION ,ACCESSIBILITY ,DIGITAL DEVELOPMENT ,ELECTRICITY ,INTERNATIONAL TELECOMMUNICATION ,NATIONAL STRATEGY ,ACTION PLAN ,HARMONIZATION ,CONNECTIVITY ,FOREIGN DIRECT INVESTMENTS ,INFORMATICS ,LINUX ,TECHNICAL REQUIREMENTS ,BROADBAND CONNECTIVITY ,ACCESS TO INTERNET ,DEMOCRATIC PROCESSES ,NETWORKING ,BUSINESS ACTIVITIES ,RESULT ,SATELLITE ,PUBLIC ADMINISTRATION ,BROADBAND ,CONNECTION SPEED ,BENCHMARK ,SMART PHONES ,ATLAS ,BROADBAND PENETRATION ,USES ,USER ,IMPLEMENTATION PROCESS ,NETWORKS ,WEB ,DEMOGRAPHIC DATA ,INTERFACE ,GOVERNMENT STAKEHOLDERS ,ADSL ,TELEPHONY ,ENTRY ,EQUIPMENT ,INTERNET SERVICE PROVIDERS ,LICENSE ,TELECOMMUNICATIONS ,WIRELESS SERVICE ,3G ,SOCIAL DEVELOPMENT ,GOVERNMENT EXPENDITURE ,PRIVATE SECTOR ,RESEARCHERS ,VIDEO ,AUTHENTICATION 
,MAINTENANCE COSTS ,WHITE PAPER ,WIRELESS ACCESS ,PORNOGRAPHY ,MANAGEMENT SOFTWARE ,TRANSMISSION ,MOBILE PHONE ,KIOSKS ,ELECTRONIC COMMUNICATIONS ,ANTENNAS ,POLICY FRAMEWORK ,FINANCIAL RESOURCES ,MARKET SHARE ,TARGETS ,UNIVERSAL SERVICE ,EDUCATIONAL STANDARDS ,TELEGEOGRAPHY ,GOVERNMENT SERVICES ,INFORMATION SOCIETY ,ISP ,ARTICLE ,WIRELESS ,INSTALLATION ,TELECOMMUNICATIONS SERVICES ,ACCESSION ,INTERNATIONAL TELECOMMUNICATIONS ,COMMUNICATIONS NETWORKS ,INTERNET SERVICES ,RESULTS ,TELECOM ,E-GOVERNMENT SERVICES ,INFRASTRUCTURE DEVELOPMENT ,INTERNET INFRASTRUCTURE ,CAPACITY BUILDING ,ENTERTAINMENT ,ICT ,USER EXPECTATIONS ,COMMUNITIES ,FUNCTIONALITY - Abstract
This white paper was prepared by the World Bank's Transport and Information and Communication Technologies (ICT) Global Practice at the request of the MIOA. Delivery of the white paper is part of a wider package of technical assistance by the World Bank to the Government of FYR Macedonia. The paper starts by giving an overview of the state of telecom development in rural FYR Macedonia from the standpoint of affordability and availability of commercial broadband Internet access services for the less advantaged groups of the population. The next section describes the Wi-Fi Kiosk Project, outlining its scope, aim, and implementation process while bringing forward the public's experiences with the Wi-Fi kiosks. This section also examines technical parameters related to Internet usage and demonstrates the problems of kiosk maintenance in remote and rural areas. Section five references specific policy and regulatory measures designed by different government stakeholders, with the goal of analysing the approach chosen to ensure availability of fixed and (or) mobile broadband Internet in the rural areas of the country. The white paper concludes with a set of observations and recommendations aiming to address the sustainability of the results achieved by the Wi-Fi Kiosk Project and to offer next steps to increase rural connectivity in FYR Macedonia.
- Published
- 2014
39. QSLiMFinder: improved short linear motif prediction using specific query protein data
- Author
- Nicolas Palopoli, Richard Edwards, and Kieren T. Lythgow
- Subjects
Statistics and Probability ,Computer science ,Sequence analysis ,Amino Acid Motifs ,computer.software_genre ,Biochemistry ,SHORT LINEAR MOTIF ,purl.org/becyt/ford/1 [https] ,03 medical and health sciences ,Sequence Analysis, Protein ,Protein methods ,Protein Interaction Mapping ,Humans ,Protein Interaction Domains and Motifs ,Short linear motif ,Molecular Biology ,Human proteins ,030304 developmental biology ,0303 health sciences ,030302 biochemistry & molecular biology ,BENCHMARK ,purl.org/becyt/ford/1.2 [https] ,Original Papers ,PROTEIN-PROTEIN INTERACTION ,Computer Science Applications ,Computational Mathematics ,Computational Theory and Mathematics ,Ciencias de la Computación e Información ,Motif (music) ,Data mining ,Ciencias de la Información y Bioinformática ,Sequence Analysis ,computer ,CIENCIAS NATURALES Y EXACTAS ,Algorithms ,Software ,SLIM - Abstract
Motivation: The sensitivity of de novo short linear motif (SLiM) prediction is limited by the number of patterns (the motif space) being assessed for enrichment. QSLiMFinder uses specific query protein information to restrict the motif space and thereby increase the sensitivity and specificity of predictions. Results: QSLiMFinder was extensively benchmarked using known SLiM-containing proteins and simulated protein interaction datasets of real human proteins. Exploiting prior knowledge of a query protein likely to be involved in a SLiM-mediated interaction increased the proportion of true positives correctly returned and reduced the proportion of datasets returning a false positive prediction. The biggest improvement was seen if a short region of the query protein flanking the interaction site was known. Availability and implementation: All the tools and data used in this study, including QSLiMFinder and the SLiMBench benchmarking software, are freely available under a GNU license as part of SLiMSuite, at: http://bioware.soton.ac.uk. Contact: richard.edwards@unsw.edu.au Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2015
40. DAC-SDC Low Power Object Detection Challenge for UAV Applications.
- Author
- Xu, Xiaowei, Zhang, Xinyi, Yu, Bei, Hu, Xiaobo Sharon, Rowen, Christopher, Hu, Jingtong, and Shi, Yiyu
- Subjects
FIELD programmable gate arrays ,GRAPHICS processing units ,SYSTEMS design ,DESIGN conferences - Abstract
The 55th Design Automation Conference (DAC) held its first System Design Contest (SDC) in 2018. SDC'18 featured a low-power object detection challenge (LPODC) on designing and implementing novel algorithms for object detection in images taken from unmanned aerial vehicles (UAVs). The dataset includes 95 categories and 150k images, and the hardware platforms include Nvidia's TX2 and Xilinx's PYNQ Z1. DAC-SDC'18 attracted more than 110 entries from 12 countries. This paper presents the dataset and evaluation procedure in detail. It further discusses the methods developed by some of the entries as well as representative results. The paper concludes with directions for future improvements. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. Measuring and improving performance of clinicians: an application of patient-based records
- Author
- Dong, Minye, Xiao, Yuyin, Shi, Chenshu, and Li, Guohong
- Published
- 2023
- Full Text
- View/download PDF
42. A Real-World Benchmark Problem for Global Optimization.
- Author
- Yuriy, Romasevych, Viatcheslav, Loveikin, and Borys, Bakay
- Subjects
BENCHMARK problems (Computer science) ,GLOBAL optimization ,OPTIMIZATION algorithms ,DYNAMICAL systems ,SEARCH algorithms ,METAHEURISTIC algorithms - Abstract
The paper presents the statement of the optimal control problem for the dynamical system "crane-load". The acceleration period is under consideration, and the control must satisfy the minimum-duration condition as well as eliminate load oscillations. The objective function, which ensures satisfaction of the final condition, is developed and analyzed in terms of its topological features. It includes three arguments, and searching for them is the essence of the benchmark problem. Two variants of the problem are proposed, with varied objective function parameters. Twelve agent-based optimization algorithms have been applied to find solutions to both variants. A brief analysis of the performance of the algorithms reveals their weaknesses and advantages. Thus, the proposed real-world problem may be exploited to estimate the search performance of optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Liquid Content Detection In Transparent Containers: A Benchmark.
- Author
- Wu, You, Ye, Hengzhou, Yang, Yaqing, Wang, Zhaodong, and Li, Shuiwang
- Subjects
LIQUIDS ,INDUSTRIALISM ,CONTAINERS ,DRINKING water - Abstract
Substances in the liquid state, such as drinking water, fuels, pharmaceuticals, and chemicals, are indispensable in our daily lives. Liquid content detection in transparent containers has numerous real-world applications, for example in service robots, pouring robots, security checks, and industrial observation systems. However, the majority of existing methods concentrate either on transparent container detection or on liquid height estimation; the former provides very limited information for more advanced computer vision tasks, whereas the latter is too demanding to generalize to open-world applications. In this paper, we propose a dataset for liquid content detection in transparent containers (LCDTC), which presents an innovative task combining transparent container detection and liquid content estimation. The primary objective of this task is to obtain more information beyond the location of the container by additionally providing certain liquid content information that is easy to obtain with computer vision methods in various open-world applications. This task has potential applications in service robots, waste classification, security checks, and so on. The presented LCDTC dataset comprises 5916 images that have been extensively annotated with axis-aligned bounding boxes. We develop two baseline detectors, termed LCD-YOLOF and LCD-YOLOX, for the proposed dataset, based on two identity-preserved human posture detectors, i.e., IPH-YOLOF and IPH-YOLOX. By releasing LCDTC, we intend to stimulate more future work on the detection of liquid content in transparent containers and bring more focus to this challenging task. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Comparing Flow-R, Rockyfor3D and RAMMS to Rockfalls from the Mel de la Niva Mountain: A Benchmarking Exercise.
- Author
- Noël, François, Nordang, Synnøve Flugekvam, Jaboyedoff, Michel, Digout, Michael, Guerin, Antoine, Locat, Jacques, and Matasci, Battista
- Subjects
ROCKFALL ,DATABASE design ,DATA mapping - Abstract
Rockfall simulations are often performed at various levels of detail depending on the required safety margins of rockfall-hazard-related assessments. As a pseudo benchmark, the simulation results from different models can be put side-by-side and compared with reconstructed rockfall trajectories, and mapped deposited block fragments from real events. This allows for assessing the objectivity, predictability, and sensitivity of the models. For this exercise, mapped data of past events from the Mel de la Niva site are used in this paper for a qualitative comparison with simulation results obtained from early calibration stages of the Flow-R 2.0.9, Rockyfor3D 5.2.15 and RAMMS::ROCKFALL 1.6.70 software. The large block fragments, reaching hundreds of megajoules during their fall, greatly exceed the rockfall energies of the empirical databases used for the development of most rockfall models. The comparison for this challenging site shows that the models could be improved and that combining the use of software programs with different behaviors could be a workaround in the interim. The findings also highlight the inconvenient importance of calibrating the simulations on a per-site basis from onsite observations. To complement this process, a back calculation tool is briefly described and provided. This work also emphasizes the need to better understand rockfall dynamics to help improve rebound models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Expediting Trade : Impact Evaluation of an In-House Clearance Program
- Author
- Fernandes, Ana M., Hillberry, Russell, and Berg, Claudia
- Subjects
MEASURES ,TRADE LIBERALIZATION ,CUSTOMS ,SAMPLES ,CORPORATION ,PREFERENTIAL TREATMENT ,GENERAL EQUILIBRIUM ,WORLD TRADE ,TESTING ,EXPERIMENTS ,EMPLOYMENT ,CRITERIA ,CUSTOMS TERRITORY ,OUTCOMES ,EXPORT GROWTH ,TOURISM ,TRADE FACILITATION ,IMPORT REGIMES ,TRADE BARRIER ,COMPANY ,GUARANTEE ,DISTRIBUTION ,FIRM ,TESTS ,GOODS ,SIMULATION ,DEVELOPMENT RESEARCH ,AVERAGING ,METHODS ,TRADE DATA ,TRADE POLICY ,STANDARDS ,WORLD TRADE ORGANIZATION ,EXPORT PROMOTION ,FIRMS ,DONOR ,FORMAL ANALYSIS ,DEVELOPMENT ,MULTINATIONAL ,EVALUATION ,EXPORT CLEARANCE ,VALIDITY ,OPTIMIZATION ,WELFARE ,INFLUENCE ,CONSUMPTION ,THEORY ,TRADE AGENDA ,DEVELOPMENT POLICY ,TRENDS ,TRADE ,EQUILIBRIUM ,CUSTOMS CLEARANCE ,MULTILATERAL TRADE ,METHODOLOGY ,COSTS ,DEVELOPMENT ASSISTANCE ,RESEARCH ,DONORS ,WTO ,VARIABLES ,TOURISM POLICY ,INTERNATIONAL ECONOMICS ,VALUE ,EXPORTS ,ESTIMATES ,CUSTOMS REGULATIONS ,BENCHMARK ,INTERNATIONAL TRADE ,SEE ,TIME ,VARIABILITY ,TRAVEL ,EFFECTS ,HYPOTHESIS TESTING ,BOYCOTTS ,FORECASTS ,IMPORT DECLARATIONS ,STRATEGIC RESEARCH ,IMPORTS ,ESTIMATING ,DEVELOPING COUNTRIES ,EXPORT SUPPORT ,TECHNIQUES ,BENEFITS ,TRADE VOLUME ,ESTIMATORS ,DEVELOPMENT AGENCIES ,INTEREST ,IMPORT VALUE ,EVALUATION METHODS ,SMALL FIRMS ,SIMULATIONS ,IMPORT VALUES ,SIZE ,ECONOMIC JUSTIFICATION ,IMPORT DECLARATION ,RESEARCH WORKING PAPERS ,WEIGHT ,DATA COLLECTION - Abstract
Despite the importance of trade facilitation as an area of trade and development policy, there have been very few impact evaluations of specific trade facilitation reforms. This paper offers an evaluation of in-house clearance, a reform that allows qualified firms in Serbia to clear customs from within their own warehouse rather than at the customs office. The pooled synthetic control method applied here offers a novel solution to many of the empirical challenges that frustrate efforts to evaluate trade facilitation reforms. The method is used to estimate causal impacts on trade outcomes for 21 firms that adopted in-house clearance for import shipments. The program compressed the distribution of clearance times for adopting firms, but the estimated effects on median clearance times, inspection rates, and import value were not statistically significant. Tests for heterogeneous program impact do not indicate that the program affected adopting firms differently. Overall, the results suggest that the most evident benefit of the program for participating firms is reduced uncertainty about clearance times.
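The pooled synthetic control method named in the abstract can be illustrated in miniature. The sketch below is a hedged, hypothetical illustration (not the paper's code; firm names, donor pool, and the grid-search fitting are all simplifications): a "synthetic" firm is a convex combination of non-adopting donor firms, weighted so that its pre-reform outcome path tracks the treated firm's as closely as possible; the post-reform gap between the treated firm and its synthetic counterpart is then read as the program's effect.

```python
# Toy synthetic control with two donor firms: grid-search the weight w on
# donor A (1 - w on donor B) that minimizes squared pre-reform error
# between the treated firm's outcome path and the synthetic path.

def synthetic_control_weight(treated_pre, donor_a_pre, donor_b_pre, steps=100):
    """Return the convex weight on donor A whose mix best matches the
    treated firm's pre-period outcomes (least squares over a grid)."""
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        err = sum((t - (w * a + (1 - w) * b)) ** 2
                  for t, a, b in zip(treated_pre, donor_a_pre, donor_b_pre))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

In the pooled variant, one synthetic control is fitted per adopting firm and the per-firm treated-minus-synthetic gaps are then averaged across the 21 adopters; real implementations fit weights over many donors and predictors rather than a two-donor grid.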
- Published
- 2016
46. Global Migration Revisited : Short-Term Pains, Long-Term Gains, and the Potential of South-South Migration
- Author
- Ahmed, S. Amer, Go, Delfin S., and Willenbockel, Dirk
- Subjects
TRADE LIBERALIZATION ,AGE POPULATIONS ,ECONOMIC PERFORMANCE ,REAL INCOME ,INVESTMENT ,MIGRANT ,GENERAL EQUILIBRIUM ,VALUE ADDED ,ECONOMIC GROWTH ,BRAIN DRAIN ,IMMIGRANTS ,SKILL LEVEL ,LABOR MIGRATION ,ELASTICITY OF SUBSTITUTION ,CONSEQUENCES OF MIGRATION ,POTENTIAL OUTPUT ,EXTERNALITIES ,EMPLOYMENT ,WAGE DIFFERENTIALS ,MONITORING ,POPULATION ,MIGRANTS ,UNEMPLOYMENT ,INCOME ,PRODUCTIVITY ,MIGRANT-SENDING COUNTRIES ,RESOURCE ALLOCATION ,LABOR PRODUCTIVITY ,WORLD POPULATION ,STOCK ,INCENTIVES ,MIGRATION POLICIES ,GOODS ,POPULATIONS ,INTERNATIONAL MIGRANT ,NATIONAL ORIGIN ,SKILLED WORKERS ,ORGANIZATIONS ,LABOR SUPPLY ,AVERAGE PRODUCTIVITY ,MIGRANT WORKERS ,POLICY DISCUSSIONS ,DEVELOPMENT ECONOMICS ,REMITTANCE ,MARKETS ,POPULATION FACTS ,PUBLIC SERVICES ,LOW-INCOME COUNTRIES ,GROWTH PROJECTIONS ,REAL WAGES ,ECONOMIC COSTS ,DEVELOPMENT ,PRICES ,MIGRANT LABOR ,WAGES ,TRANSFERS ,PURCHASING POWER ,SOCIAL AFFAIRS ,WELFARE ,PROGRESS ,PRODUCTION ,POPULATION DECLINE ,CONSUMPTION LEVELS ,NATURAL RESOURCE ,ELASTICITY ,SKILLED MIGRANTS ,INFLUENCE ,GDP PER CAPITA ,THEORY ,COUNTRY OF ORIGIN ,DEVELOPMENT POLICY ,TRENDS ,MARGINAL PRODUCTIVITY ,TRADE ,EQUILIBRIUM ,LABOR DEMAND ,SUPPLY ,LABOR MOBILITY ,PAYMENTS ,NATIVE WORKERS ,IMPERFECT SUBSTITUTES ,AGRICULTURE ,DEMAND ,DEMOGRAPHIC CHANGE ,ECONOMIC INTEGRATION ,GDP ,HOST COUNTRIES ,LABOUR ,WAGE RATES ,DEVELOPMENT GOALS ,INTERNATIONAL MOVEMENTS ,CAPITAL ,POLITICAL ECONOMY ,ACCOUNTING ,HUMAN DEVELOPMENT ,ECONOMIC IMPLICATIONS ,WORKING- AGE POPULATIONS ,VALUE ,SECURITY ,MIGRANT POPULATIONS ,REMITTANCES ,UNSKILLED LABOR ,PURCHASING POWER PARITY ,POLICIES ,BENCHMARK ,FUTURE GROWTH ,POLICY ,HOST COUNTRY ,ECONOMIC PROJECTIONS ,HUMAN CAPITAL ,EFFECTS ,BENEFITS OF MIGRATION ,EFFICIENCY ,BILATERAL TRADE ,REGIONAL AGGREGATION ,MIGRATION ,WORKING-AGE POPULATIONS ,BENCHMARK DATA ,HOUSEHOLD INCOME ,LABOR FORCES ,RETURN MIGRATION ,SKILLED LABOR ,POLICY RESEARCH ,DEVELOPING COUNTRIES ,REAL GDP ,INTERNATIONAL MIGRATION ,UNSKILLED WORKERS ,KNOWLEDGE 
,POLICY RESEARCH WORKING PAPER ,LABOR ,LABOR MARKETS ,WORKFORCE ,MIGRATION FLOWS ,ECONOMIC ANALYSIS ,ECONOMICS ,WAGE INCREASES ,INTERNATIONAL MIGRANTS ,INPUTS ,LABOR EFFICIENCY ,INDUSTRIAL RELATIONS ,GLOBAL DEVELOPMENT ,LABOR FORCE ,IMMIGRATION ,WORKING-AGE POPULATION - Abstract
This paper re-examines the development implications of international migration focusing on two issues: how the costs and benefits of migration change over time, and the significance of South-South migration for development. First, the analysis finds that although greater migration could push down the wages of native workers of advanced countries in the short run, these wages eventually recover. This pattern would be mostly caused by the beneficial effect of additional labor on the real returns on capital and fostering faster capital formation. Additional South-North migration could favor capital income recipients and reduces labor income in host regions in the short run. In contrast, in sending countries, capital owners could experience lower incomes while wages rise. Globally, the welfare gains of new migrants could be expected to exceed the losses of old migrants by a wide margin. The remaining natives in sending countries could enjoy a net increase in remittances as well as an increase in labor income, although income from capital might decline. Second, in a hypothetical scenario with lower South-South migration, the implied losses of remittance income could lead to substantially lower welfare in developing countries. Although the wage differentials among developing countries tend to be smaller relative to their wage differentials with high-income countries, South-South migrants make substantial contributions to remittances.
- Published
- 2016
47. Development Economics as Taught in Developing Countries
- Author
- Mckenzie, David and Paffhausen, Anna Luisa
- Subjects
RETURNS TO SCALE ,MARGINAL PRODUCT ,DEVELOPING COUNTRY ,ECONOMIC GROWTH ,POOR COUNTRIES ,WAGE DIFFERENTIALS ,DEGREES ,POLICY MAKERS ,UNDERGRADUATES ,GRADUATE LEVEL ,CONVERGENCE HYPOTHESIS ,ECONOMICS ASSOCIATIONS ,UNEMPLOYMENT ,INCOME ,MACROECONOMICS ,POLICY OPTIONS ,WORKERS ,UNDERGRADUATE EDUCATION ,SCIENCE ,UNDERGRADUATE COURSES ,POVERTY RATES ,INCENTIVES ,COURSE SYLLABI ,MASTERS LEVEL ,POVERTY ,AGRICULTURAL PRODUCTIVITY ,INDUSTRIAL DEVELOPMENT ,PER CAPITA INCOME ,GROWTH THEORY ,DEVELOPMENT RESEARCH ,LEARNING OBJECTIVES ,GROWTH ,COLLEGE ,TRADE POLICY ,RAPID GROWTH ,FACULTIES ,PER-CAPITA INCOME ,DEVELOPMENT REPORT ,STUDENTS ,DEVELOPMENT ECONOMICS ,MARKETS ,ECONOMICS RESEARCH ,DEVELOPMENT ,EDUCATION STATISTICS ,SCHOOLS ,FAILURES ,MASTERS DEGREES ,RURAL AREAS ,NATIONAL INCOME ,INCOMPLETE MARKETS ,ANALYTICAL METHODS ,PRODUCTION ,GRADUATE ,RESEARCH OUTPUT ,EMPIRICAL WORK ,ELASTICITY ,LITERACY ,TOTAL FACTOR PRODUCTIVITY GROWTH ,GDP PER CAPITA ,THEORY ,ECONOMIC LITERATURE ,POVERTY REDUCTION ,DEVELOPMENT POLICY ,COURSE CONTENT ,MEASURING POVERTY ,TRADE ,ASYMMETRIC INFORMATION ,TERTIARY EDUCATION ,PER CAPITA INCOMES ,LITERATURE ,EMPIRICAL EVIDENCE ,ECONOMIC DEVELOPMENT ,AGRICULTURE ,DEVELOPED COUNTRIES ,RESEARCH FINDINGS ,PRODUCTIVITY GROWTH ,RESEARCH ,GDP ,VARIABLES ,TEXTBOOKS ,FACULTY ,HUMAN DEVELOPMENT REPORT ,MACROECONOMIC MANAGEMENT ,BASIC KNOWLEDGE ,ComputingMilieux_COMPUTERSANDEDUCATION ,CAPITAL ,POLITICAL ECONOMY ,OPEN ACCESS ,HUMAN DEVELOPMENT ,VALUE ,PAPERS ,COUNTRY LEVEL ,GROWTH WITHOUT DEVELOPMENT ,INDUSTRIAL POLICY ,EXAM QUESTIONS ,BENCHMARK ,INTERNATIONAL TRADE ,STUDENT ,DATA SETS ,EXCHANGE RATE ,HUMAN CAPITAL ,RESEARCH CENTERS ,POVERTY TRAPS ,RESEARCHERS ,FACULTY MEMBERS ,INTERNATIONAL ORGANIZATIONS ,MIDDLE INCOME COUNTRIES ,TEACHING ,PUBLIC POLICY ,ABSOLUTE POVERTY ,LEARNING ,CREDIT ,POLICY RESEARCH ,UNDERGRADUATE LEVEL ,STUDENT LEARNING ,SYLLABI ,GROWTH RATE ,DEVELOPING COUNTRIES ,ENROLLMENT RATIO ,INDEX NUMBERS ,MARKET FAILURES ,SIGNIFICANT CORRELATION 
,MICRO DATA ,EDUCATION LEVEL ,LABOR MARKETS ,UNIVERSITIES ,GROWTH MODEL ,UNDERGRADUATE DEGREE PROGRAM ,DEVELOPMENT STRATEGIES ,DATA AVAILABILITY ,ECONOMICS ,CAPITA INCOMES ,DEVELOPMENT INDICATORS ,IMPERFECT COMPETITION ,INCREASING RETURNS ,INSTITUTES ,INPUTS ,CAPITA INCOME ,LABOR FORCE ,PROFESSORS ,TOTAL FACTOR PRODUCTIVITY ,PRODUCTION FUNCTION ,RICH COUNTRIES ,UNDERGRADUATE STUDENTS ,SCHOOL ,URBAN AREAS ,ECONOMIC RESEARCH ,UNIVERSITY ,DEVELOPMENT POLICIES - Abstract
This paper uses a combination of survey questions to instructors and data collected from course syllabi and examinations to examine how the subject of development economics is taught at the undergraduate and masters levels in developing countries, and benchmark this against undergraduate classes in the United States. The study finds that there is considerable heterogeneity in what is considered development economics: there is a narrow core of only a small set of topics such as growth theory, poverty and inequality, human capital, and institutions taught in at least half the classes, with substantial variation in other topics covered. In developing countries, development economics is taught largely as a theoretical subject coupled with case studies, with few courses emphasizing data or empirical methods and findings. This approach contrasts with the approach taken in leading U.S. economics departments and with the evolution of development economics research. The analysis finds that country income per capita, the role of the state in the economy, the education level in the country, and the involvement of the instructor in research are associated with how close a course is to the frontier. The results suggest there are important gaps in how development economics is taught.
- Published
- 2015
48. Long-Term Visual Localization Revisited.
- Author
- Toft, Carl, Maddern, Will, Torii, Akihiko, Hammarstrand, Lars, Stenborg, Erik, Safari, Daniel, Okutomi, Masatoshi, Pollefeys, Marc, Sivic, Josef, Pajdla, Tomas, Kahl, Fredrik, and Sattler, Torsten
- Subjects
AUGMENTED reality ,VIRTUAL reality ,SINGLE-degree-of-freedom systems ,AUTONOMOUS vehicles ,POSE estimation (Computer vision) ,CAMERAS - Abstract
Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes, as well as weather and seasonal variations, while providing highly accurate six degree-of-freedom (6DOF) camera pose estimates. In this paper, we extend three publicly available datasets containing images captured under a wide variety of viewing conditions, but lacking camera pose information, with ground truth pose information, making evaluation of the impact of various factors on 6DOF camera pose estimation accuracy possible. We also discuss the performance of state-of-the-art localization approaches on these datasets. Additionally, we release around half of the poses for all conditions, and keep the remaining half private as a test set, in the hopes that this will stimulate research on long-term visual localization, learned local image features, and related research areas. Our datasets are available at visuallocalization.net , where we are also hosting a benchmarking server for automatic evaluation of results on the test set. The presented state-of-the-art results are to a large degree based on submissions to our server. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. Benchmarking Machine Learning Algorithms on Blood Glucose Prediction for Type I Diabetes in Comparison With Classical Time-Series Models.
- Author
- Xie, Jinyu and Wang, Qian
- Subjects
FORECASTING ,TYPE 1 diabetes ,MACHINE learning ,STANDARD deviations ,REGRESSION analysis - Abstract
Objective: This paper aims to compare the performance of several commonly known machine-learning (ML) models versus a classic Autoregression with Exogenous inputs (ARX) model in the prediction of blood glucose (BG) levels using time-series data of patients with Type 1 diabetes (T1D). Methods: The ML algorithms include ML-based regression models and deep learning models such as a vanilla Long Short-Term Memory (LSTM) network and a Temporal Convolution Network (TCN). Evaluations have been conducted with respect to different input features, regression model orders, and the use of the recursive or direct method for multi-step prediction of BG levels. Prediction performance metrics include the average Root Mean Square Error (RMSE), temporal gain (TG) for early prediction, and the normalized energy of the second-order differences (ESOD) of the predicted time series, which reflects the risk of false alerts on hypo-/hyperglycemia events. Results: The ARX model achieved the lowest average RMSE for both recursive and direct methods and the second-highest average TG under the direct method, but with a higher average normalized ESOD than some other models. Conclusion: There was no significant advantage observed from the ML models compared to the classic ARX model in predicting BG levels for T1D, except that TCN's performance was more robust with respect to BG trajectories with spurious oscillations, for which ARX tended to over-predict peak BG values and under-predict valley BG values. Significance: Insight learned from this study could help researchers and clinical practitioners select appropriate models for BG prediction. [ABSTRACT FROM AUTHOR]
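The recursive multi-step method mentioned in the abstract can be sketched concretely. This is a hedged illustration, not the paper's code: the ARX order and coefficients below are hypothetical, and a fitted model would estimate them from patient data. The key idea is that each one-step prediction is fed back into the model as if it were an observed BG value, so errors can compound over the horizon (the direct method instead trains a separate predictor per horizon step).

```python
# Recursive multi-step prediction with a toy second-order ARX model:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
# where y is blood glucose and u is an exogenous input (e.g., insulin or carbs).

def arx_predict_recursive(y_hist, u_future, coeffs, horizon):
    """Predict `horizon` steps ahead, feeding each prediction back in."""
    a1, a2, b1 = coeffs
    y = list(y_hist)              # needs at least the last two observations
    preds = []
    for k in range(horizon):
        y_next = a1 * y[-1] + a2 * y[-2] + b1 * u_future[k]
        preds.append(y_next)
        y.append(y_next)          # recursion: the prediction re-enters the model
    return preds
```

With such predictions in hand, the abstract's RMSE metric is just the root mean squared gap between predicted and observed BG over the test horizon.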
- Published
- 2020
- Full Text
- View/download PDF
50. Grounded Affordance from Exocentric View
- Author
- Luo, Hongchen, Zhai, Wei, Zhang, Jing, Cao, Yang, and Tao, Dacheng
- Published
- 2024
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library