1,076 results for '"Large scale"'
Search Results
2. Full-scale experimental study on wave impacts at stepped revetments
- Author
- Herbst, Maximilian, Kerpen, Nils B., Schoonees, Talia, and Schlurmann, Torsten
- Published
- 2025
- Full Text
- View/download PDF
3. Biomass, photosynthetic activity, and biomolecule composition in Chlorella fusca (Chlorophyta) cultured in a raceway pond operated under greenhouse conditions
- Author
- Silva, Jaqueline Carmo, Quirós, Sandra Escalante, Lombardi, Ana Teresa, and Figueroa, Félix Lopez
- Published
- 2023
- Full Text
- View/download PDF
4. Using multi-sourced big data to correlate sleep deprivation and road traffic noise: A US county-level ecological study
- Author
- Tong, Huan, Warren, Joshua L., Kang, Jian, and Li, Mingxiao
- Published
- 2023
- Full Text
- View/download PDF
5. Global convergence in a modified RMIL-type conjugate gradient algorithm for nonlinear systems of equations and signal recovery
- Author
- Yan Xia and Songhua Wang
- Subjects
- nonlinear systems of equations, conjugate gradient method, large scale, global convergence, signal recovery, Mathematics, QA1-939, Applied mathematics. Quantitative methods, T57-57.97
- Abstract
This paper proposes a modified Rivaie-Mohd-Ismail-Leong (RMIL)-type conjugate gradient algorithm for solving nonlinear systems of equations with convex constraints. The proposed algorithm offers several key characteristics: (1) The modified conjugate parameter is non-negative, thereby enhancing the proposed algorithm's stability. (2) The search direction satisfies sufficient descent and trust region properties without relying on any line search technique. (3) The global convergence of the proposed algorithm is established under general assumptions without requiring the Lipschitz continuity condition for nonlinear systems of equations. (4) Numerical experiments indicate that the proposed algorithm surpasses existing similar algorithms in both efficiency and stability, particularly when applied to large-scale nonlinear systems of equations and signal recovery problems in compressed sensing.
- Published
- 2024
- Full Text
- View/download PDF
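The direction update in item (2) of the abstract above can be made concrete in a few lines of NumPy. The sketch below is an illustration under stated assumptions: it uses the classical RMIL conjugate parameter clipped at zero, not the paper's exact modified formula.

```python
import numpy as np

def rmil_type_direction(F_k, F_prev, d_prev):
    """Sketch of an RMIL-type conjugate gradient direction for F(x) = 0.

    beta follows the classical RMIL formula and is clipped at zero, mirroring
    the non-negative modified parameter described above; the paper's exact
    modification (which guarantees sufficient descent) may differ.
    """
    beta = F_k @ (F_k - F_prev) / (d_prev @ d_prev)
    beta = max(beta, 0.0)          # non-negativity safeguard
    return -F_k + beta * d_prev    # the first iteration would use d_0 = -F_0
```

In projection-type variants of this family, the iterate is then advanced along the direction and projected back onto the convex constraint set.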
6. An Efficient and Stable Registration Framework for Large Point Clouds at Two Different Moments.
- Author
- Zhao, Guangxin, Li, Jinlong, Xi, Jingyi, and Luo, Lin
- Subjects
- POINT cloud, STATISTICAL sampling, FEATURE extraction, DEEP learning, SAMPLING methods, RECORDING & registration
- Abstract
Point cloud registration plays an important role in many application scenarios; however, the registration of large-scale point clouds captured at two different moments suffers from low efficiency, low accuracy, and a lack of stability. In this paper, we propose a registration framework for large-scale point clouds at different moments, which first downsamples the point clouds using a random sampling method, then applies a random expansion strategy to make up for the information lost to the random sampling, then completes a first registration with a deep learning network based on the extraction of keypoints and feature descriptors combined with RANSAC, and finally refines the registration using the point-to-point ICP method. We conducted validation and application experiments on large-scale point clouds of key train components; the results show much higher accuracy and efficiency than other methods, demonstrating the effectiveness of our framework for real large-scale point clouds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
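The final stage of the pipeline above (point-to-point ICP after random downsampling) can be sketched in plain NumPy/SciPy. This is a minimal stand-in, not the authors' implementation; the deep-learning coarse registration and random expansion steps are omitted, and the sample size is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp_point_to_point(source, target, iters=30, sample=10_000):
    rng = np.random.default_rng(0)
    src = source[rng.choice(len(source), min(sample, len(source)), replace=False)]  # random downsampling
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbour correspondences
        R, t = best_rigid(src, target[idx])
        src = src @ R.T + t                    # apply the incremental transform
    return src
```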
7. Global convergence in a modified RMIL-type conjugate gradient algorithm for nonlinear systems of equations and signal recovery.
- Author
- Xia, Yan and Wang, Songhua
- Subjects
- STOCHASTIC convergence, NONLINEAR systems, CONJUGATE gradient methods, MATHEMATICAL formulas, NUMERICAL analysis
- Abstract
This paper proposes a modified Rivaie-Mohd-Ismail-Leong (RMIL)-type conjugate gradient algorithm for solving nonlinear systems of equations with convex constraints. The proposed algorithm offers several key characteristics: (1) The modified conjugate parameter is non-negative, thereby enhancing the proposed algorithm's stability. (2) The search direction satisfies sufficient descent and trust region properties without relying on any line search technique. (3) The global convergence of the proposed algorithm is established under general assumptions without requiring the Lipschitz continuity condition for nonlinear systems of equations. (4) Numerical experiments indicate that the proposed algorithm surpasses existing similar algorithms in both efficiency and stability, particularly when applied to large-scale nonlinear systems of equations and signal recovery problems in compressed sensing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Memory-efficient neurons and synapses for spike-timing-dependent-plasticity in large-scale spiking networks.
- Author
- Urbizagastegui, Pablo, van Schaik, André, and Runchun Wang
- Subjects
- ARTIFICIAL neural networks, RECOLLECTION (Psychology), NEUROPLASTICITY, DIGITAL computer simulation, SYNAPSES
- Abstract
This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks involving synaptic plasticity. We focus on the memory accesses performed during a common synaptic plasticity rule since this can be a significant factor limiting the efficiency of the simulations. We propose neuron models that are represented by only three state variables, which are engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval is executed solely by fetching postsynaptic variables, promoting contiguous memory storage and leveraging the capabilities of burst mode operations to reduce the overhead associated with each access. Different plasticity rules could be implemented despite the adopted simplifications, each leading to a distinct synaptic weight distribution (i.e., unimodal and bimodal). Moreover, our method requires fewer average memory accesses compared to a naive approach. We argue that the strategy described can speed up memory transactions and reduce latencies while maintaining a small memory footprint. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
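As a rough illustration of the three-state-variable premise above, the sketch below keeps one membrane potential, one synaptic current, and one plasticity trace per neuron; fetching only these contiguous postsynaptic arrays is what keeps memory access cheap. The concrete leaky-integrate-and-fire dynamics and time constants here are assumptions, not the paper's engineered model.

```python
import numpy as np

N = 1_000
v = np.zeros(N)        # state 1: membrane potential
i_syn = np.zeros(N)    # state 2: synaptic current
trace = np.zeros(N)    # state 3: postsynaptic STDP trace

def step(spikes_in, dt=1e-3, tau_v=20e-3, tau_i=5e-3, tau_tr=20e-3, v_th=1.0):
    """One simulation step over all neurons; only contiguous per-neuron arrays are touched."""
    global v, i_syn, trace
    i_syn += spikes_in - (dt / tau_i) * i_syn      # integrate incoming spikes
    v += (dt / tau_v) * (-v) + dt * i_syn          # leaky integration
    fired = v >= v_th
    v[fired] = 0.0                                 # reset after a spike
    trace -= (dt / tau_tr) * trace                 # decay the STDP trace
    trace[fired] += 1.0                            # bump trace on postsynaptic spikes
    return fired
```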
9. Hybrid segmented manufacturing of metal part by integrating wire arc additive manufacturing and machining.
- Author
- Li, Benquan, Ravichander, Bharath Bhushan, Kumar, Golden, and Li, Wei
- Subjects
- ALUMINUM wire, COMMODITY futures, HARDNESS testing, SUBSTRATES (Materials science), TENSILE tests
- Abstract
Additive manufacturing (AM) has been a significant manufacturing technology adopted in various industries for years. Although different types of AM technologies have been developed to serve customized manufacturing scenarios, the ability of AM to fabricate large-scale mechanical parts is still limited, since most AM platforms have a specific maximum build volume. In this paper, we propose a segment-by-segment printing strategy to print large-scale components using a robot-based Wire Arc Additive Manufacturing (WAAM) and machining platform. The consumed wire was Aluminum 5356 (Al5356). In this project, three segments were sequentially printed on an aluminum alloy substrate in a non-enclosed working platform. After printing, a series of tests including hardness testing, tensile testing, and sectional observation was conducted. It was found that each printed segment could be deposited onto the previous segments and combined into a continuous single part without any cracks or cavities at the joined areas. The mechanical properties revealed by these tests also suggested that the welding joints between segments were as strong as normal areas. Thus, the proposed segment-by-segment strategy can be extended to 3D print large-scale metal components more efficiently and provides a feasible solution to the maximum-build-volume limitation of AM devices, which has been a challenge in previous studies on large-scale component printing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Insights into the Effects of Tile Size and Tile Overlap Levels on Semantic Segmentation Models Trained for Road Surface Area Extraction from Aerial Orthophotography.
- Author
- Cira, Calimanut-Ionut, Manso-Callejo, Miguel-Ángel, Alcarria, Ramon, Iturrioz, Teresa, and Arranz-Justel, José-Juan
- Subjects
- PAVEMENTS, BORDERLANDS, ORTHOPHOTOGRAPHY, DEEP learning, SURFACE area
- Abstract
Studies addressing the supervised extraction of geospatial elements from aerial imagery with semantic segmentation operations (including road surface areas) commonly feature tile sizes varying from 256 × 256 pixels to 1024 × 1024 pixels with no overlap. Relevant geo-computing works in the field often comment on prediction errors that could be attributed to the effect of tile size (the number of pixels, or the amount of information in the processed image) or of the overlap level between adjacent image tiles (caused by the absence of continuity information near the borders). This study provides further insights into the impact of tile overlaps and tile sizes on the performance of deep learning (DL) models trained for road extraction. In this work, three semantic segmentation architectures were trained for the road surface area extraction task on data from the SROADEX dataset (orthoimages and their binary road masks), which contains approximately 700 million pixels of the positive "Road" class. First, a statistical analysis was conducted on the performance metrics achieved on unseen testing data featuring around 18 million pixels of the positive class. The goal of this analysis was to study the difference in mean performance and the main and interaction effects of the fixed factors on the dependent variables. The statistical tests proved that the impact on performance was significant for the main effects and for the two-way interactions between tile size and tile overlap and between tile size and DL architecture, at a significance level of 0.05. We also provide further insights and trends from the extensive qualitative analysis carried out on the predictions of the best models at each tile size. The results indicate that training the DL models on larger tile sizes with a small percentage of overlap delivers better road representations and that testing different combinations of model and tile sizes can help achieve a better extraction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
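A minimal sketch of the tiling variables studied above (cutting an orthoimage into fixed-size tiles whose neighbours share a configurable overlap) might look like the following; edge handling is deliberately simplified, and the tile size and overlap percentage are illustrative, not the SROADEX settings.

```python
import numpy as np

def make_tiles(image, tile=512, overlap=51):
    """Cut an (H, W, C) array into tile x tile patches; adjacent tiles share `overlap` pixels."""
    stride = tile - overlap
    h, w = image.shape[:2]
    tiles, positions = [], []
    for y in range(0, h - tile + 1, stride):          # note: trailing edge strips are ignored here
        for x in range(0, w - tile + 1, stride):
            tiles.append(image[y:y + tile, x:x + tile])
            positions.append((y, x))
    return np.stack(tiles), positions

ortho = np.zeros((2048, 2048, 3), dtype=np.uint8)               # stand-in for an aerial orthoimage
tiles, pos = make_tiles(ortho, tile=512, overlap=int(512 * 0.1))  # ~10% overlap between tiles
```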
11. A Comprehensive Comparison of Stable and Unstable Area Sampling Strategies in Large-Scale Landslide Susceptibility Models Using Machine Learning Methods.
- Author
- Sinčić, Marko, Bernat Gazibara, Sanja, Rossi, Mauro, Krkač, Martin, and Mihalić Arbanas, Snježana
- Subjects
- LANDSLIDE hazard analysis, MACHINE learning, DIGITAL elevation models, SUPPORT vector machines, RANDOM forest algorithms, LANDSLIDES
- Abstract
This paper focuses on large-scale landslide susceptibility modelling in NW Croatia. The objective of this research was to provide new insight into stable and unstable area sampling strategies on a representative inventory of small and shallow landslides mainly occurring in soil and soft rock. Four strategies were tested for stable area sampling (random points, stable area polygon, stable polygon buffering and stable area centroid) in combination with four strategies for unstable area sampling (landslide polygon, smoothing digital terrain model derived landslide conditioning factors, polygon buffering and landslide centroid), resulting in eight sampling scenarios. Using the Logistic Regression, Neural Network, Random Forest and Support Vector Machine algorithms, 32 models were derived and analysed. The main conclusions reveal that polygon sampling of unstable areas is imperative in large-scale modelling, and that subjective and/or biased stable area sampling leads to misleading models. Moreover, Random Forest and Neural Network proved to be the more favourable methods (0.804 and 0.805 AUC, respectively), but also showed extreme sensitivity to the tested sampling strategies. In the comprehensive comparison, the advantages and disadvantages of the 32 derived models were analysed through quantitative and qualitative parameters to highlight their applicability to large-scale landslide zonation. The results yielded by this research are beneficial to the susceptibility modelling step in large-scale landslide susceptibility assessments as they enable the derivation of more reliable zonation maps applicable to spatial and urban planning systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
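To make the sampling strategies above concrete, here is a toy Shapely sketch of two of them: drawing unstable samples inside a landslide polygon and stable samples outside a buffer around it. The geometries, buffer distance, and sample counts are invented for illustration.

```python
import numpy as np
from shapely.geometry import Point, Polygon

rng = np.random.default_rng(0)
landslide = Polygon([(0, 0), (40, 0), (40, 25), (0, 25)])                   # toy landslide polygon
study_area = Polygon([(-200, -200), (500, -200), (500, 500), (-200, 500)])  # toy study area

def sample_inside(poly, n):
    """Rejection-sample n random points that fall inside a polygon."""
    minx, miny, maxx, maxy = poly.bounds
    pts = []
    while len(pts) < n:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if poly.contains(p):
            pts.append(p)
    return pts

unstable = sample_inside(landslide, 50)        # "landslide polygon" strategy
buffer_zone = landslide.buffer(50.0)           # keep stable samples away from the landslide
stable = [p for p in sample_inside(study_area, 500)
          if not buffer_zone.contains(p)][:50]  # "stable polygon buffering" idea (may yield < 50)
```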
12. Evaluating Tree Species Mapping: Probability Sampling Validation of Pure and Mixed Species Classes Using Convolutional Neural Networks and Sentinel-2 Time Series.
- Author
- Schadauer, Tobias, Karel, Susanne, Loew, Markus, Knieling, Ursula, Kopecky, Kevin, Bauerhansl, Christoph, Berger, Ambros, Graeber, Stephan, and Winiwarter, Lukas
- Subjects
- THEMATIC maps, CONVOLUTIONAL neural networks, FOREST biodiversity, REMOTE-sensing images, FOREST surveys
- Abstract
The accurate large-scale classification of tree species is crucial for the monitoring, protection, and management of the Earth's invaluable forest ecosystems. Numerous previous studies have recognized the suitability of satellite imagery, particularly Sentinel-2 imagery, for this task. In this study, we utilized a dense phenology Sentinel-2 time series, which offered consistent data across multiple granules, to map tree species across the entire forested area in Austria. Aiming for the classification scheme to more accurately represent actual forest conditions, we included mixed tree species and sparsely populated classes (classes with sparse canopy cover) alongside pure tree species classes. To enhance the training data for the mixed and sparse classes, synthetic data creation was employed. Autocorrelation has significant implications for the validation of thematic maps. To investigate the impact of spatial dependency on validation data, two methods were employed at numerous split and buffer distances: spatial split validation and a validation method based on buffered ground reference probability samples provided by the National Forest Inventory (NFI). While a random training data holdout set yielded 99% accuracy, the spatial split validation resulted in 74% accuracy, emphasizing the importance of accounting for spatial autocorrelation when validating with holdout sets derived from polygon-based training data. The validation based on NFI data resulted in 55% overall accuracy, 91% post-hoc pure class accuracy, and 79% accuracy when confusions in phenological proximity were disregarded (e.g., spruce–larch confused with spruce). The significant differences in accuracy observed between spatial split and NFI validation underscore the challenge for polygon-based training data to capture ground reference forest complexity, particularly in areas with diverse forests. This difficulty is further accentuated by the pure class accuracy of 91%, revealing the substantial impact of mixed stands on the accuracy of tree species maps. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A Multi-Local Search-Based SHADE for Wind Farm Layout Optimization.
- Author
- Yang, Yifei, Tao, Sichen, Li, Haotian, Yang, Haichuan, and Tang, Zheng
- Subjects
- OPTIMIZATION algorithms, CONSTRAINT algorithms, DIFFERENTIAL evolution, WIND power plants, GENETIC algorithms
- Abstract
Wind farm layout optimization (WFLO) is focused on utilizing algorithms to devise a more rational turbine layout, ultimately maximizing power generation efficiency. Traditionally, genetic algorithms have been frequently employed in WFLO due to the inherently discrete nature of the problem. However, in recent years, researchers have shifted towards enhancing continuous optimization algorithms and incorporating constraints to address WFLO challenges. This approach has shown remarkable promise, outperforming traditional genetic algorithms and gaining traction among researchers. To further elevate the performance of continuous optimization algorithms in the context of WFLO, we introduce a multi-local search-based SHADE, termed MS-SHADE. MS-SHADE is designed to fine-tune the trade-off between convergence speed and algorithmic diversity, reducing the likelihood of convergence stagnation in WFLO scenarios. To assess the effectiveness of MS-SHADE, we employed a more extensive and intricate wind condition model in our experiments. In a set of 16 problems, MS-SHADE's average utilization efficiency improved by 0.14% compared to the best algorithm, while the optimal utilization efficiency increased by 0.3%. The results unequivocally demonstrate that MS-SHADE surpasses state-of-the-art WFLO algorithms by a significant margin. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Noncontact Dynamic Three-Component Displacement Measurement with a Dual Stereovision–Enabled Uncrewed Aerial System.
- Author
- Perry, Brandon J., Guo, Yanlin, and Atadero, Rebecca
- Subjects
- IMPACT loads, DISPLACEMENT (Mechanics), SINGLE-degree-of-freedom systems, BLAST effect, TRANSLATIONAL motion, ROTATIONAL motion, SENSOR placement
- Abstract
Measuring the dynamic displacements of a structure provides a comprehensive understanding of the structure, especially when it is subjected to different types of dynamic loading (i.e., wind, traffic, impact loads, blast loads, etc.). Despite their usefulness, direct displacement measurements are typically not collected due to the cumbersome logistical issues of sensor placement and maintenance and the impracticality of instrumenting contact-based sensors across all significant structures. In this context, this study proposes a novel dual stereovision technique to measure the dynamic displacement of structures using a portable, noncontact measurement system that involves an uncrewed aerial system (UAS) and four optical cameras. One pair of cameras tracks the three-component (x, y, and z) motion of a region of interest (ROI) on a structure with respect to the UAS, and the other pair measures the six-degree-of-freedom (6-DOF) motion (both rotational and translational) of the UAS by tracking a stationary reference. The motion of the UAS is then compensated for to recover the true dynamic displacement of the ROI. The proposed dual stereovision technique realizes simultaneous measurement of all three displacement components of the structure and the 6-DOF UAS motion through a mathematically elegant process. The unique dual stereovision technique allows flexibility in choosing a global reference coordinate system, greatly enhancing the feasibility of applying the new technology in various field environments. This new technique has overcome the major challenge of significant UAS motions in full-scale applications. Furthermore, this technique relies on natural features and eliminates the requirement of artificial targets on the structure, permitting applications to difficult-to-access structures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Development and Sensitivity Analysis of an Improved Harmony Search Algorithm with a Multiple Memory Structure for Large-Scale Optimization Problems in Water Distribution Networks.
- Author
- Lee, Ho-Min, Sadollah, Ali, Choi, Young-Hwan, Joo, Jin-Gul, and Yoo, Do-Guen
- Abstract
The continuous supply of drinking water for human life is essential to ensure the sustainability of cities, society, and the environment. At a time when water scarcity is worsening due to climate change, the construction of an optimized water supply infrastructure is necessary. In this study, an improved version of the Harmony Search Algorithm (HSA), named the Maisonette-type Harmony Search Algorithm (MTHSA), was developed. Unlike the HSA, the MTHSA has a two-floor structure, which increases optimization efficiency by employing multiple explorations on the first floor and additional exploitation of excellent solutions. Parallel explorations enhance exploration (global search), the tendency to uniformly explore the entire search space. Additional exploitation among excellent solutions also enhances local search (effective exploitation), the intensive exploration of solutions that appear most promising. Following the development of the improved algorithm, it was applied to water distribution networks in order to verify its efficiency, and the numerical results were analyzed. Through the considered applications, the improved algorithm is shown to be highly efficient when applied to large-scale optimization problems with large numbers of decision variables, as shown in comparison with the considered optimizers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
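For orientation, the sketch below implements the plain Harmony Search Algorithm that the MTHSA above builds on, with one harmony memory, a memory considering rate (hmcr), a pitch adjustment rate (par), and a bandwidth (bw). The two-floor maisonette structure with parallel explorations is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def harmony_search(obj, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    dim = len(lb)
    hm = rng.uniform(lb, ub, size=(hms, dim))          # harmony memory
    cost = np.array([obj(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                    # take a value from memory
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjustment
                    new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1, 1)
            else:                                      # random consideration
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        c = obj(new)
        worst = cost.argmax()
        if c < cost[worst]:                            # replace the worst harmony
            hm[worst], cost[worst] = new, c
    return hm[cost.argmin()], cost.min()

# Toy usage on a sphere function in 10 dimensions:
best, val = harmony_search(lambda x: np.sum(x**2), lb=np.full(10, -5.0), ub=np.full(10, 5.0))
```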
16. In Situ Growth Method for Large-Area Flexible Perovskite Nanocrystal Films.
- Author
- Zhou, Xingting, Xu, Bin, Zhao, Xue, Lv, Hongyu, Qiao, Dongyang, Peng, Xing, Shi, Feng, Chen, Menglu, and Hao, Qun
- Subjects
- VAPOR-plating, SILICON detectors, MAGNESIUM ions, SPIN coating, METAL halides
- Abstract
Metal halide perovskites have shown unique advantages compared with traditional optoelectronic materials. Currently, perovskite films are commonly produced by either multi-step spin coating or vapor deposition techniques. However, both methods face challenges regarding large-scale production. Herein, we propose a straightforward in situ growth method for the fabrication of CsPbBr3 nanocrystal films. The films cover an area over 5.5 cm × 5.5 cm, with precise thickness control of a few microns and decent uniformity. Moreover, we demonstrate that the incorporation of magnesium ions into the perovskite enhances crystallization and effectively passivates surface defects, thereby further enhancing luminous efficiency. By integrating this approach with a silicon photodiode detector, we observe an increase in responsivity from 1.68 × 10⁻² A/W to 3.72 × 10⁻² A/W at a 365 nm ultraviolet wavelength. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. An Industrial Perspective for Sustainable Polypropylene Plastic Waste Management via Catalytic Pyrolysis—A Technical Report.
- Author
- Chasioti, Andromachi and Zabaniotou, Anastasia
- Abstract
Recycling plastics on an industrial scale is a key approach to the circular economy. This study presents a techno-economic analysis aimed at recycling polypropylene waste, one of the main consumer plastics. Specifically, it evaluates the technical and economic feasibility of achieving a large-scale cracking process that converts polypropylene waste into an alternative fuel. Pyrolysis is considered a promising technique to convert plastic waste into liquid oil and other value-added products, with the dual benefit of recovering resources and providing a zero-waste solution. This study concerns fast catalytic pyrolysis in a fluidized bed reactor, in the presence of a fluid catalytic cracking catalyst of low acidity for high heat transmission, for an industrial plant with a capacity of 1 t/h of polypropylene waste provided by the Greek Petroleum Industry. Based on the international literature, the operating conditions were chosen as a pyrolysis temperature of 430 °C, a pressure of 1 atm, and a heating rate of 5 °C/min, with product yields of 71, 14, and 15 wt.% for liquid fuel, gas, and solid product, respectively. The plant design includes a series of apparatuses, the main one being the pyrolyzer. The catalytic method is selected over the non-catalytic one because the presence of the catalyst increases the quantity and quality of the liquid product, which is the main product of the plant. The energy loops of recycling pyrolysis gas and char as a low-carbon fuel in the plant were considered. The production cost and annual revenue for 2023 are anticipated to reach €13.7 million (115 €/t) and €15 million (15 €/t), respectively, with an estimated investment of €5.3 million. The payback time to recover the cost of investment is estimated at 2.4 years. The endeavor is thus rather economically sustainable. A critical parameter for large-scale systems is securing feedstock at a low or negligible price. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Multiscale Interactions between Local Short- and Long-Term Spatio-Temporal Mechanisms and Their Impact on California Wildfire Dynamics.
- Author
- Afolayan, Stella, Mekonnen, Ademe, Gamelin, Brandi, and Lin, Yuh-Lang
- Subjects
- WEATHER, CALIFORNIA wildfires, FIRE weather, SOUTHERN oscillation, WILDFIRES, EL Nino
- Abstract
California has experienced a surge in wildfires, prompting research into contributing factors, including weather and climate conditions. This study investigates the complex, multiscale interactions between large-scale climate patterns, such as the Boreal Summer Intraseasonal Oscillation (BSISO), El Niño Southern Oscillation (ENSO), and the Pacific Decadal Oscillation (PDO), their influence on moisture and temperature fluctuations, and wildfire dynamics in California. The combined impacts of PDO and BSISO on intraseasonal fire weather changes; the interplay between the fire weather index (FWI), relative humidity, vapor pressure deficit (VPD), and temperature in assessing wildfire risks; and geographical variations in the relationship between the FWI and climatic factors within California are examined. The study employs a multi-pronged approach, analyzing wildfire frequency and burned areas alongside climate patterns and atmospheric conditions. The findings reveal significant variability in wildfire activity across different climate conditions, with heightened risks during specific BSISO phases, La Niña, and the cool PDO phase. The influence of BSISO varies depending on its interaction with PDO. Temperature, relative humidity, and VPD show strong predictive significance for wildfire risks, with significant relationships between the FWI and temperature in elevated regions (correlation, r > 0.7, p ≤ 0.05) and between the FWI and relative humidity along the Sierra Nevada Mountains (r ≤ −0.7, p ≤ 0.05). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. An effective controller design strategy for higher order systems.
- Author
- Chandramouleeswaran, Ganesh, Deivanayagam Pillai, Nagarajan, Ramasamy, Shanmugasundaram, Pudupalayam Sachithanandam, Mayurappriyan, and Iqbal, Mustafa Mohamed
- Subjects
- CONTINUOUS time systems, REDUCED-order models
- Abstract
The recent methods that guarantee the stability of reduced-order models are Mihailov stability, improved Padé approximation and truncation, improved generalized pole clustering, moment matching and salp swarm optimization. Further, these methods can overcome limitations such as non-uniqueness, pole clustering, gain adjustment and the difficulty of maintaining the dominant roots in the lower-order system for non-minimum higher-order plants. This research emphasizes the design of the proposed compensating algorithm, which uses an additional open-loop zero, for the stable reduced-order models of large-scale single-input single-output linear time-invariant continuous-time systems. The results of the proposed algorithm are compared with those of existing compensating methods, and the design is validated and illustrated numerically. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. A large-scale extraction framework for mapping urban informal settlements using remote sensing and semantic segmentation.
- Author
- Yanan Zhang, Chen Lu, Jiao Wang, and Fuguang Du
- Subjects
- REMOTE sensing, URBAN planning, METROPOLIS, CITIES & towns, RESIDENTIAL areas, IMAGE segmentation
- Abstract
Urban informal settlements (UISs) are densely populated and poorly developed residential areas in urban areas. The mapping of UISs using remote sensing is crucial for urban planning and management. However, the large-scale extraction of UISs is impeded by the labor-intensive task of collecting numerous training samples and the lack of automatic and effective city partitioning. To overcome these challenges, we propose a large-scale extraction framework for UISs based on semantic segmentation of high-resolution remote sensing images. Utilizing Deeplab V3 Plus as the foundational extraction model, the proposed framework introduces fast sample collection based on GLCM features. In addition, an automatic city partition approach combining clustering and fine-tuning is proposed to enhance performance in extracting a specific category of UISs. The results of the case study conducted in 36 major Chinese cities show that the proposed framework achieved good performance, with an overall F1 score of 85.76%. Furthermore, comparative assessments were performed to demonstrate the effectiveness of the automatic city partition. The proposed framework offers a practical approach for the large-scale extraction of UISs, which holds great significance for sustainable development, poverty estimation, infrastructure construction, and urban planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
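The abstract above mentions fast sample collection based on GLCM (gray-level co-occurrence matrix) features; a minimal scikit-image sketch of extracting such texture features from a patch might look like this. The distances, angles, and feature set are assumptions, since the paper's exact choices are not given here (requires scikit-image >= 0.19 for the graycomatrix/graycoprops names).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = (rng.random((64, 64)) * 255).astype(np.uint8)    # stand-in for a remote-sensing patch

# Co-occurrence matrix at distance 1 along two directions, normalized.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# Scalar texture descriptors, averaged over directions.
features = {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```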
21. Memory-efficient neurons and synapses for spike-timing-dependent-plasticity in large-scale spiking networks
- Author
- Pablo Urbizagastegui, André van Schaik, and Runchun Wang
- Subjects
- synaptic plasticity, large scale, neuromorphic computing, digital simulation, memory architecture, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks involving synaptic plasticity. We focus on the memory accesses performed during a common synaptic plasticity rule since this can be a significant factor limiting the efficiency of the simulations. We propose neuron models that are represented by only three state variables, which are engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval is executed solely by fetching postsynaptic variables, promoting contiguous memory storage and leveraging the capabilities of burst mode operations to reduce the overhead associated with each access. Different plasticity rules could be implemented despite the adopted simplifications, each leading to a distinct synaptic weight distribution (i.e., unimodal and bimodal). Moreover, our method requires fewer average memory accesses compared to a naive approach. We argue that the strategy described can speed up memory transactions and reduce latencies while maintaining a small memory footprint.
- Published
- 2024
- Full Text
- View/download PDF
22. Adjoint Algorithm Design of Selective Mode Reflecting Metastructure for BAL Applications.
- Author
- Li, Zean, Zhang, Xunyu, Qiu, Cheng, Xu, Yingshuai, Zhou, Zhipeng, Wei, Ziyuan, Qiao, Yiman, Chen, Yongyi, Wang, Yubing, Liang, Lei, Lei, Yuxin, Song, Yue, Jia, Peng, Zeng, Yugang, Qin, Li, Ning, Yongqiang, and Wang, Lijun
- Subjects
- ALGORITHMS, ENERGY transfer, DEGREES of freedom
- Abstract
Broad-area lasers (BALs) have found applications in a variety of crucial fields on account of their high output power and high energy transfer efficiency. However, they suffer from poor spatial beam quality due to multi-mode behavior along the transverse direction of the waveguide. In this paper, we propose a novel metasurface waveguide structure acting as a transverse-mode-selective back-reflector for BALs. To effectively inverse-design such a structure, a digital adjoint algorithm is introduced to handle the considerably large design area and the high number of degrees of freedom. As a proof of the concept, a device structure with a design area of 40 × 20 μm² is investigated. The simulation results exhibit high fundamental mode reflection (above 90%), while higher-order transverse mode reflections are suppressed below 0.2%. This is, to our knowledge, the largest device structure designed based on the inverse method. We further investigated the device's robustness and the feasibility of the inverse method, and the results are discussed in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Validity, feasibility, and effectiveness of a voice‐recognition based digital cognitive screener for dementia and mild cognitive impairment in community‐dwelling older Chinese adults: A large‐scale implementation study.
- Author
- Zhao, Xuhao, Wen, Haoxuan, Xu, Guohai, Pang, Ting, Zhang, Yaping, He, Xindi, Hu, Ruofei, Yan, Ming, Chen, Christopher, Wu, Xifeng, and Xu, Xin
- Abstract
INTRODUCTION: We investigated the validity, feasibility, and effectiveness of a voice recognition‐based digital cognitive screener (DCS) for detecting dementia and mild cognitive impairment (MCI) in a large‐scale community sample of elderly participants. METHODS: Eligible participants completed demographic, cognitive, and functional assessments and the DCS. Neuropsychological tests were used to assess domain‐specific and global cognition, while the diagnosis of MCI and dementia relied on the Clinical Dementia Rating Scale. RESULTS: Among the 11,186 participants, the DCS showed high completion rates (97.5%) and a short administration time (5.9 min) across gender, age, and education groups. The DCS demonstrated areas under the receiver operating characteristic curve (AUCs) of 0.95 and 0.83 for dementia and MCI detection, respectively, among 328 participants in the validation phase. Furthermore, the DCS resulted in time savings of 16.2% to 36.0% compared to the Mini‐Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). DISCUSSION: This study suggests that the DCS is an effective and efficient tool for dementia and MCI case‐finding in large‐scale cognitive screening. Highlights: To the best of our knowledge, this is the first cognitive screening tool based on voice recognition and conversational AI that has been assessed in a large population of Chinese community‐dwelling elderly. With the upgrade to a new multimodal understanding model, the DCS can accurately assess participants' responses, including different Chinese dialects, and provide automatic scores. The DCS not only exhibited good discriminant ability in detecting dementia and MCI cases, it also demonstrated a high completion rate and efficient administration regardless of gender, age, and education differences. The DCS is economically efficient and scalable, and had better screening efficacy than the MMSE or MoCA, supporting wider implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Identifying and mapping the spatial distribution of regions prone to snowmelt flood hazards in the arid region of Central Asia: A case study in Xinjiang, China.
- Author
- Liu, Yan, Zhang, Jun min, Huo, Hong, Li, Yang, Lu, Xin yu, Wang, Ni, and Yang, Yun
- Subjects
- FLOOD risk, SNOWMELT, ARID regions, HAZARD mitigation, FLOOD control, FLOOD warning systems, RISK managers, FLOODS
- Abstract
Snowmelt floods are highly hazardous meteorological disasters that can potentially threaten human lives and property. Hence, snowmelt susceptibility mapping (SSM) plays an important role in flood prevention systems and aids emergency responders and flood risk managers. In this paper, a method of identifying snowmelt flood hazards is proposed, and a large‐scale snowmelt flood hazard zonation scheme based on historical recordings and multisource remote sensing data is established. To assess the quality of our approach, the proposed model was tested in the cold and arid region of Xinjiang, China. Overall, 140 historical snowmelt flood events and 27 explanatory factors were selected to construct a geospatial dataset for SSM of the contemporary period. GridSearchCV was used to comprehensively search the candidate parameter grid for the random forest (RF) algorithm. Then, the geospatial dataset was divided into two subsets: 70% for training and 30% for testing. Next, SSM results were obtained with the RF algorithm using the optimized parameters. The results indicate that our optimized RF classifier performs well for the task of SSM, with a high AUC value (0.975) on the test dataset. The validation and analysis suggest that the proposed method can efficiently identify snowmelt flood hazards in undersampled arid areas at a regional scale. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
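The model-selection step described above (a 70/30 split with GridSearchCV over random forest hyperparameters, evaluated by AUC) maps almost directly onto scikit-learn. The dataset and parameter grid below are stand-ins, not the study's 140-event geospatial dataset or its actual grid.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in: 27 explanatory factors, binary flood/no-flood labels.
X, y = make_classification(n_samples=280, n_features=27, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)  # 70/30 split

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300, 500], "max_depth": [None, 5, 10]},  # illustrative grid
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_tr, y_tr)

# Evaluate the tuned classifier on the held-out 30%.
auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
```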
25. Seed sowing shifts native–exotic richness relationships in favor of natives during restoration.
- Author
- Bassett, Tyler J., Grman, Emily, and Brudvig, Lars A.
- Subjects
- NATIVE species, PRAIRIES, RESTORATION ecology, INTRODUCED species, PRESCRIBED burning, FARMS, SOWING, PLANT productivity
- Abstract
A central goal of ecological restoration is to promote diverse ecosystems dominated by native species, but restorations are often plagued by exotic species. A better understanding of factors underlying positive correlations between native and exotic species richness, a pattern that is nearly ubiquitous at large scales in plant communities, may help managers modify these correlations to favor native plant species during restoration. Across 29 tallgrass prairie sites restored through seed sowing onto former agricultural lands, we examined whether the relationship between native and exotic richness is (1) altered by management, such as seed additions and prescribed fire; (2) controlled instead by environmental conditions and successional processes; or (3) altered by management in certain environments and not in others. As is commonly found, native and exotic richness were positively correlated at large scales (i.e., across sites) in this study. Management actions explained much of the remaining variation in native richness, while environmental conditions explained very little. Sites sown with more species at higher seeding rates, especially forb species, had higher native richness than predicted by the native–exotic richness relationship. In contrast, native richness was lower in older restorations than predicted by the native–exotic richness relationship, because native richness, and not exotic richness, declined with restoration age. We show that management actions such as seed sowing can modify the native–exotic richness relationship to favor native species during restoration. The development of management actions that mitigate native species richness declines over time will further benefit native species restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Seed sowing shifts native–exotic richness relationships in favor of natives during restoration
- Author
- Tyler J. Bassett, Emily Grman, and Lars A. Brudvig
- Subjects
- biodiversity, invasion, large scale, native diversity, native–exotic richness relationship, prairie restoration, Ecology, QH540-549.5
- Abstract
A central goal of ecological restoration is to promote diverse ecosystems dominated by native species, but restorations are often plagued by exotic species. A better understanding of factors underlying positive correlations between native and exotic species richness, a pattern that is nearly ubiquitous at large scales in plant communities, may help managers modify these correlations to favor native plant species during restoration. Across 29 tallgrass prairie sites restored through seed sowing onto former agricultural lands, we examined whether the relationship between native and exotic richness is (1) altered by management, such as seed additions and prescribed fire; (2) controlled instead by environmental conditions and successional processes; or (3) altered by management in certain environments and not in others. As is commonly found, native and exotic richness were positively correlated at large scales (i.e., across sites) in this study. Management actions explained much of the remaining variation in native richness, while environmental conditions explained very little. Sites sown with more species at higher seeding rates, especially forb species, had higher native richness than predicted by the native–exotic richness relationship. In contrast, native richness was lower in older restorations than predicted by the native–exotic richness relationship, because native richness, and not exotic richness, declined with restoration age. We show that management actions such as seed sowing can modify the native–exotic richness relationship to favor native species during restoration. The development of management actions that mitigate native species richness declines over time will further benefit native species restoration.
- Published
- 2024
- Full Text
- View/download PDF
27. Industrial Distillation Aspects of Diketene
- Author
- Mehmet Ogün Biçer, Erik von Harbou, Andreas Klein, Hilke-Marie Lorenz, and Christoph Taeschler
- Subjects
- Diketene, Distillation, Industrial, Large scale, Process safety, Thermal stability, Chemistry, QD1-999
- Abstract
Large-scale distillation is a challenge in many respects. Particularly difficult is the purification by distillation of a compound with limited thermal stability. This article describes various aspects of these difficulties with some possible solutions. Special emphasis is placed on the collaboration of different disciplines to find pragmatic solutions to these challenges. The purification of diketene in quantities of several 1000 t a⁻¹ is an excellent example to illustrate the different requirements. Although the distillation of diketene has been carried out by several companies for many years, there are still some aspects that deserve special attention.
- Published
- 2024
- Full Text
- View/download PDF
28. Identifying and mapping the spatial distribution of regions prone to snowmelt flood hazards in the arid region of Central Asia: A case study in Xinjiang, China
- Author
- Yan Liu, Jun min Zhang, Hong Huo, Yang Li, Xin yu Lu, Ni Wang, and Yun Yang
- Subjects
- large scale, multisource remotely sensed data, random forest algorithm, snowmelt susceptibility mapping, River protective works. Regulation. Flood control, TC530-537, Disasters and engineering, TA495
- Abstract
Snowmelt floods are highly hazardous meteorological disasters that can potentially threaten human lives and property. Hence, snowmelt susceptibility mapping (SSM) plays an important role in flood prevention systems and aids emergency responders and flood risk managers. In this paper, a method of identifying snowmelt flood hazards is proposed, and a large‐scale snowmelt flood hazard zonation scheme based on historical recordings and multisource remote sensing data is established. To assess the quality of our approach, the proposed model was tested in the cold and arid region of Xinjiang, China. Overall, 140 historical snowmelt flood events and 27 explanatory factors were selected to construct a geospatial dataset for SSM of the contemporary period. GridSearchCV was used to comprehensively search the candidate parameter grid for the random forest (RF) algorithm. Then, the geospatial dataset was divided into two subsets: 70% for training and 30% for testing. Next, SSM results were obtained with the RF algorithm using the optimized parameters. The results indicate that our optimized RF classifier performs well for the task of SSM, with a high AUC value (0.975) on the test dataset. The validation and analysis suggest that the proposed method can efficiently identify snowmelt flood hazards in undersampled arid areas at a regional scale.
- Published
- 2024
- Full Text
- View/download PDF
29. Methods and Applications of Full-Scale Field Testing for Large-Scale Circulating Fluidized Bed Boilers.
- Author
- Dong, Zhonghao, Lu, Xiaofeng, Zhang, Rongdi, Li, Jianbo, Wu, Zhaoliang, Liu, Zhicun, Yang, Yanting, Wang, Quanhai, and Kang, Yinhu
- Subjects
- TWO-phase flow, HEAT transfer, BOILERS, OXYGEN consumption, CARBON emissions, BURNUP (Nuclear chemistry)
- Abstract
Circulating fluidized bed (CFB) boilers offer a technically viable and environmentally friendly means for the clean and efficient utilization of solid fuels. However, the complex gas–solid two-phase flow processes within them have hindered a thorough resolution of prediction issues related to coupled combustion, heat transfer, and pollutant generation characteristics. To address the deficiencies in scientific research, meet the practical operational needs of CFB boilers, and comply with new carbon emission policies, conducting full-scale field tests on large-scale CFB boilers is needed so that the complex gas–solid flow, combustion, and heat transfer mechanisms in the furnace can be understood. In this paper, issues related to large-scale CFB boilers, including the uniformity of air distribution, secondary air injection range, spatial distribution of oxygen consumption and combustion reactions, distribution of pollutant generation, hydrodynamic and heat transfer characteristics, coal feeding distribution characteristics, coal diffusion characteristics under thermal operating conditions, and engineering research on anti-wear technology, are reviewed. By integrating practical engineering applications, the basic methods and measurement techniques used in full-scale field tests for large-scale CFB boilers are summarized, providing a practical reference for conducting engineering tests with large-scale CFB boilers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A multi-step relay implementation of the successive iteration of analysis and design method for large-scale natural frequency-related topology optimization.
- Author
- Shi, Lin, Li, Jing, Liu, Pai, Zhu, Yixiao, and Kang, Zhan
- Subjects
- TOPOLOGY, STRUCTURAL design, DEGREES of freedom, PROBLEM solving, EIGENVALUES
- Abstract
Large-scale natural frequency-related topology optimization problems pose a great challenge due to the high computational cost of the iterative computation of the system eigenvalues and their sensitivities in each design iteration. This paper presents a new framework based on a multi-step relay strategy, in conjunction with the idea of successive iteration of analysis and design (SIAD), to find high-quality solutions at affordable computational cost. The method starts from a relatively coarse finite element discretization and then uses gradually refined meshes to improve the design resolution. For a given level of mesh resolution, the method interleaves the eigenvalue solution routine with the optimization iterations to achieve sequential approximations of the eigenpairs as the structural design evolves, thus avoiding time-consuming eigenpair analysis in each optimization iteration. By sequentially solving the optimization problem and projecting intermediate designs and the corresponding approximate eigenmodes from a coarser mesh onto finer meshes, the proposed multi-step relay method can further substantially alleviate the computational burden and generate high-resolution boundaries in the final design. Numerical examples show that this method can be used to solve natural frequency maximization topology optimization problems with millions of degrees of freedom on a desktop workstation, and is much more efficient than the conventional double-loop method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
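The core idea above (reusing approximate eigenpairs across design iterations instead of solving each eigenproblem from scratch) can be imitated crudely with SciPy by warm-starting the iterative eigensolver with the previous design's eigenvector. The toy generalized eigenproblem below is an assumption for illustration, not the paper's SIAD scheme.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy generalized eigenproblem K v = lam M v (1D stiffness-like K, identity mass M).
n = 2000
K = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")
M = sp.identity(n, format="csc")

# Solve the "initial design", then warm-start the "updated design" with v0,
# mimicking sequential approximation of the eigenpair across design steps.
lam, vec = eigsh(K, k=1, M=M, sigma=0.0, which="LM")
K_updated = K + 0.01 * sp.identity(n, format="csc")          # small design change
lam2, vec2 = eigsh(K_updated, k=1, M=M, sigma=0.0, which="LM", v0=vec[:, 0])
```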
31. Automated Nanodroplet Dispensing for Large-Scale Spheroid Generation via Hanging Drop and Parallelized Lossless Spheroid Harvesting.
- Author
- Zieger, Viktoria, Woehr, Ellen, Zimmermann, Stefan, Frejek, Daniel, Koltay, Peter, Zengerle, Roland, and Kartmann, Sabrina
- Subjects
- MICROPLATES, CELL culture, DISEASE progression, PROOF of concept, WORKFLOW
- Abstract
Creating model systems that replicate in vivo tissues is crucial for understanding complex biological pathways like drug response and disease progression. Three-dimensional (3D) in vitro models, especially multicellular spheroids (MCSs), offer valuable insights into physiological processes. However, generating MCSs at scale with consistent properties and efficiently recovering them pose challenges. We introduce a workflow that automates large-scale spheroid production and enables parallel harvesting into individual wells of a microtiter plate. Our method, based on the hanging-drop technique, utilizes a non-contact dispenser for dispensing nanoliter droplets of a uniformly mixed cell suspension. The setup allows for extended processing times of up to 45 min without compromising spheroid quality. As a proof of concept, we achieved a 99.3% spheroid generation efficiency and maintained highly consistent spheroid sizes, with a coefficient of variation below 8% for MCF7 spheroids. Our centrifugation-based drop transfer for spheroid harvesting achieved a sample recovery of 100%. We successfully transferred HT29 spheroids from hanging drops to individual wells preloaded with collagen matrices, where they continued to proliferate. This high-throughput workflow opens new possibilities for prolonged spheroid cultivation, advanced downstream assays, and increased hands-off time in complex 3D cell culture protocols. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Large‐scale synthesis of CuS nanoparticles for photothermal materials using high‐concentration Cu complex ion precursor.
- Author
- Jeon, Hee Yeon, Ryu, Cheol‐Hui, Han, Seungheon, Lee, Dong Hoon, Byun, Jongmin, and Lee, Young‐In
- Subjects
- COMPLEX ions, COPPER, NANOPARTICLES, P-type semiconductors, COPPER sulfide, METAL sulfides
- Abstract
Copper sulfide (CuS), a copper‐deficient p‐type semiconductor material, has been widely utilized due to its unique optical properties, low toxicity, and cost‐effectiveness. Although many studies have been conducted on synthesizing CuS nanoparticles, harsh synthetic conditions and low yields remain problems to be solved. This study presents a new methodology that can synthesize CuS nanoparticles in large quantities at room temperature and pressure using high‐concentration Cu complex ion precursors. This methodology is based on the theory that the critical nucleus radius and the critical nucleation free energy decrease as the precursor concentration increases, allowing a large number of nanoparticles to be synthesized with low energy input. In addition, the complex ions in the solution make it possible to minimize the aggregation of nanoparticles, which is otherwise a problem for nanoparticles synthesized at high precursor concentrations. We synthesized nanoparticles at precursor concentrations from 0.1 to 2.5 M to confirm the effect of the precursor concentration on the size, shape, and yield of the nanoparticles. As the precursor concentration increased, the particle size decreased and the yield improved. The CuS nanoparticles synthesized at the highest concentration had a size of about 17 nm without strong agglomeration and a yield of about 213.9 g/L. Furthermore, the nanoparticles showed excellent photothermal performance due to their high near‐infrared absorption. When about 0.1 g of the nanoparticles was irradiated with a Xenon lamp and an 808 nm laser, the maximum temperatures were 53.7°C and 172.1°C and the heating rates were 13.8°C/mg and 33°C/mg, respectively. The excellent photothermal properties of CuS nanoparticles suggest the potential of this material for various applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Editorial: Large scale coastal processes and their interactions with changing coastal environments
- Author
- Iacopo Carnacina, Ming Li, and Nicoletta Leonardi
- Subjects
- large scale, coastal, morphology, data analysis, coastal eco-environment, Science, General. Including nature conservation, geographical distribution, QH1-199.5
- Published
- 2024
- Full Text
- View/download PDF
34. A novel habitat adaptability evaluation indicator (HAEI) for predicting yield of county-level winter wheat in China based on multisource climate data from 2001 to 2020
- Author
- Xiaobin Xu, Wei He, and Hongyan Zhang
- Subjects
- Adaptability evaluation, Yield prediction, Climate data, Remote sensing, Large scale, Physical geography, GB3-5030, Environmental sciences, GE1-350
- Abstract
The effective collaboration between crop habitat adaptability evaluation and yield prediction is of great significance. Climate factors play an indispensable role, but the contributions of different climate factors to crop growth exhibit spatiotemporal heterogeneity and confounding, amplifying the challenges associated with large-scale dynamic evaluations. Additionally, the mounting input parameters in yield prediction compound the uncertainty and intricacy of modeling. To address these challenges, a climate-driven dynamic habitat adaptability evaluation indicator (HAEI) was developed, capable of forecasting county-level winter wheat yields in China. First, the distribution characteristics and matching relationship between climate and yield variability were explored from multi-source data over a long time series, and a novel method of multiple-factor adaptive matching of habitat membership degree was proposed. Second, considering the interactions and contribution differences between multiple factors at different phenological periods, a comprehensive HAEI suitable for the entire growth period of winter wheat was constructed. The results showed that the HAEI can integrate the climate information that has the greatest impact on yield variability and has a significant correlation with yield in various regions and periods, with an average correlation of 0.70. Remarkably, the predictive models incorporating the HAEI consistently outperformed other yield prediction algorithms, demonstrating superior accuracy (R2 = 0.62–0.76, nRMSE = 0.1517–0.2031). Even in the least favorable scenario, involving a linear model with HAEI input, satisfactory results were achieved. This comprehensive framework effectively mitigates the adverse consequences of widespread agricultural climate heterogeneity and evaluates the habitat adaptability and yield status of wheat at the county level in China.
- Published
- 2023
- Full Text
- View/download PDF
35. Large-Scale Group Decision-Making Method Using Hesitant Fuzzy Rule-Based Network for Asset Allocation.
- Author
- Yaakob, Abdul Malek, Shafie, Shahira, Gegov, Alexander, Rahman, Siti Fatimah Abdul, and Khalif, Ku Muhammad Naim Ku
- Subjects
- GROUP decision making, ASSET allocation, SOCIAL network analysis, MULTICASTING (Computer networks), RESEARCH personnel, HESITATION
- Abstract
Large-scale group decision-making (LSGDM) has become common in the new era of technology development involving a large number of experts. Recently, the community detection method from social network analysis (SNA) has been highlighted by researchers as useful for handling the complexity of LSGDM. However, it is still challenging to deal with the reliability and hesitancy of information as well as the interpretability of the method. For this reason, we introduce a new Z-hesitant fuzzy network approach with community detection, put into practice for stock selection. The proposed approach was subsequently compared with an established approach in order to evaluate its applicability and efficacy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. An Investigation on the Teeth Crowning Effects on the Transient EHL Performance of Large-Scale Wind Turbine Spur Gears.
- Author
- Jamali, Hazim U., Aljibori, H. S. S., Jweeg, Muhsin Jaber, Abdullah, Oday I., and Ruggiero, Alessandro
- Subjects
- SPUR gearing, WIND turbines, TEETH, STRESS concentration, SURFACE pressure
- Abstract
Crowning is applied to wind turbine gears, including spur gears, to ensure adequate stress distribution and contact localization in the main gearbox gears, improving gear performance in the presence of misalignments. Each gear tooth is crowned along the face width using a parabolic curve that graduates from a maximum height at the edges and vanishes at the center of the tooth flank. This crowning transfers the elastohydrodynamic contact problem from a line to a point contact case, where the surface curvatures and pressure gradient are considered in both directions of the solution space. A wide range of longitudinal crowning heights is considered in this analysis under heavily loaded teeth for typical large-scale wind turbine gears. Furthermore, the variation in the velocities is considered in the analysis. The full transient elastohydrodynamic point contact solution considers non-Newtonian oil behavior, and the numerical solution is based on the finite difference method. This work focuses on evaluating the effectiveness of longitudinal tooth crowning in terms of the resulting pressure distribution and the corresponding film thickness. The modification of the tooth flank significantly elevates the film thickness levels over the zones close to the tooth edges without a significant increase in the pressure values. Moreover, the zone close to the tooth edges on both sides, where the pressure is expected to drop to the ambient pressure, is extended as a result of the flank modification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
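The parabolic crowning profile described in the abstract above (zero at mid-face, maximum at the edges) is commonly written as follows; the symbols C_max (maximum crown height) and b (face width) are my notation, not necessarily the authors':

```latex
C(y) = C_{\max}\left(\frac{2y}{b}\right)^{2},
\qquad y \in \left[-\tfrac{b}{2},\, \tfrac{b}{2}\right]
```

so that C(0) = 0 at the flank centre and C(±b/2) = C_max at the tooth edges.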
37. Multiagent Coordination and Teamwork: A Case Study for Large-Scale Dynamic Ready-Mixed Concrete Delivery Problem.
- Author
-
Hanif, Shaza, Din, Shahab Ud, Gui, Ning, and Holvoet, Tom
- Subjects
- *
MULTIAGENT systems , *NP-hard problems , *BIOLOGICALLY inspired computing , *CONCRETE , *COMPOSITE columns - Abstract
The ready-mixed concrete delivery (RMC) problem is a scheduling problem in which multiple trucks deliver concrete to order sites while abiding by hard constraints in a dynamic environment. It is an NP-hard problem, impractical to solve with exhaustive methods, and therefore requires heuristic approaches that generate sub-optimal schedules. Owing to its distributed nature, we address the problem with a decentralised, scalable, cooperative MAS (multiagent system) that dynamically generates schedules, and we explore the impact of truck teamwork on schedule optimisation. This work illustrates two novel approaches to the dynamic RMC problem: a Delegate MAS approach and a team-extended approach. We present an empirical study comparing our novel approaches with existing ones, evaluating them by classifying the RMC case study scenarios by their stress, scale, and dynamism characteristics. With 40% to 70% improvement over different metrics, the results show that both approaches generate better schedules and that using agent teams augments performance. Thus, a decentralised MAS with an appropriate coordination approach and teamwork can be used to solve constrained dynamic scheduling problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Balancing the Average Weighted Completion Times in Large-Scale Two-Agent Scheduling Problems: An Evolutionary-Type Computational Study.
- Author
-
Avolio, Matteo
- Subjects
- *
ASSEMBLY line balancing , *GENETIC algorithms , *NP-hard problems , *ASSIGNMENT problems (Programming) - Abstract
The problem of balancing the average weighted completion times of two classes of jobs is an NP-hard scheduling problem that was recently introduced in the literature. Interpreted as a cooperative two-agent single-machine problem, it has applications in various practical contexts, such as balancing delivery times in logistics, balancing assembly lines in manufacturing, and balancing waiting times of groups of people in services. The only solution technique currently in the literature is a Lagrangian heuristic, based on solving a finite number of successive linear assignment problems whose dimension depends on the total number of jobs. Since the Lagrangian approach has not proven particularly suitable for large-scale problems, we propose to tackle the problem with a genetic algorithm. Unlike the Lagrangian heuristic, our approach is effective also for large instances (up to 2000 jobs), as confirmed by numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
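The genetic algorithm in the abstract above is described only at a high level; the sketch below shows one plausible permutation-encoded GA for the two-class balancing objective, with the instance, fitness details, and operators all hypothetical stand-ins rather than the paper's actual design:

```python
import random

random.seed(42)

# Hypothetical instance: (job class 'A'/'B', processing time, weight).
JOBS = [("A", 3, 2), ("A", 5, 1), ("B", 2, 3), ("B", 4, 2), ("A", 1, 1), ("B", 6, 1)]

def imbalance(seq):
    """|avg weighted completion time of class A - class B| for a job order."""
    t, totals, weights = 0, {"A": 0.0, "B": 0.0}, {"A": 0.0, "B": 0.0}
    for j in seq:
        cls, p, w = JOBS[j]
        t += p
        totals[cls] += w * t
        weights[cls] += w
    return abs(totals["A"] / weights["A"] - totals["B"] / weights["B"])

def order_crossover(p1, p2):
    """OX: copy a slice from parent 1, fill the rest in parent 2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(seq, rate=0.2):
    """Swap two positions with a small probability."""
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]

def ga(pop_size=30, generations=200):
    pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=imbalance)
        elite = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            c = order_crossover(*random.sample(elite, 2))
            mutate(c)
            children.append(c)
        pop = elite + children
    return min(pop, key=imbalance)

best = ga()
print(best, imbalance(best))
```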
39. Fast newton method to solve KLR based on multilevel circulant matrix with log-linear complexity.
- Author
-
Zhang, Junna, Zhou, Shuisheng, Fu, Cui, and Ye, Feng
- Subjects
CIRCULANT matrices ,NEWTON-Raphson method ,TIME complexity ,FAST Fourier transforms ,COMPUTATIONAL complexity - Abstract
Kernel logistic regression (KLR) is a conventional nonlinear classifier in machine learning. With the explosive growth of data size, the storage and computation of large dense kernel matrices is a major challenge in scaling KLR. Even when the Nyström approximation is applied to solve KLR, the corresponding method faces time complexity of O(nc²) and space complexity of O(nc), where n is the number of training instances and c is the sample size. We propose a fast Newton method to efficiently solve large-scale KLR problems by exploiting the storage and computing advantages of a multilevel circulant matrix (MCM). By approximating the kernel matrix with an MCM, the storage space is reduced to O(n), and by further approximating the coefficient matrix of the Newton equation as an MCM, the computational complexity of each Newton iteration is reduced to O(n log n). The proposed method runs in log-linear time per iteration because the multiplication of an MCM (or its inverse) with a vector can be implemented via the multidimensional fast Fourier transform (mFFT). Experimental results on large-scale binary- and multi-classification problems show that the proposed method enables KLR to scale to large problems with less memory consumption and less training time, without sacrificing test accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
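The O(n log n) iteration cost in the KLR abstract above rests on a standard fact: a circulant matrix is diagonalized by the DFT, so its matrix-vector product is a circular convolution computable with the FFT. A one-level illustration in numpy (the paper's construction is multilevel; this sketch is not):

```python
import numpy as np

def circulant_matvec(c: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute C @ x in O(n log n), where C is circulant with first column c.

    Since C is diagonalized by the DFT, C @ x = ifft(fft(c) * fft(x)).
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Dense check: column j of C is c cyclically shifted down by j.
C = np.array([np.roll(c, k) for k in range(n)]).T
print(np.allclose(C @ x, circulant_matvec(c, x)))  # True
```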
40. EasySSR: a user-friendly web application with full command-line features for large-scale batch microsatellite mining and samples comparison.
- Author
-
Aguiar Alves, Sandy Ingrid, Costa Ferreira, Victor Benedito, Dias Dantas, Carlos Willian, da Costa da Silva, Artur Luiz, and Jucá Ramos, Rommel Thiago
- Subjects
WEB-based user interfaces ,MOLECULAR biology ,SHORT tandem repeat analysis ,BASE pairs ,MICROSATELLITE repeats ,COMPARATIVE genomics ,NANOSATELLITES ,MICROARRAY technology - Abstract
Microsatellites, also known as SSRs or STRs, are polymorphic DNA regions with tandem repetitions of a nucleotide motif of 1-6 base pairs, with a broad range of applications in fields such as comparative genomics, molecular biology, and forensics. However, the majority of researchers lack computational training and struggle to run command-line tools or very limited web tools for their SSR research, spending considerable time learning to execute the software and tabulating the post-processed data in other tools or manually, time that could be spent directly on data analysis. We present EasySSR, a user-friendly web tool with full command-line functionality, designed for batch identification and comparison of SSRs in sequences, draft genomes, or complete genomes, requiring no prior bioinformatics skills to run. EasySSR requires only a FASTA file and an optional GenBank file of one or more genomes to identify and compare STRs. The tool can automatically analyze and compare SSRs in whole genomes, convert GenBank to PTT files, identify perfect and imperfect SSRs in coding and non-coding regions, and compare their frequencies, abundance, motifs, flanking sequences, and iterations, producing many downloadable outputs such as PTT files, interactive charts, and Excel tables, giving the user data ready for further analysis in minutes. EasySSR was implemented as a web application, which can be executed from any browser and is available for free at https://computationalbiology.ufpa.br/easyssr/. Tutorials, usage notes, and download links to the source code can be found at https://github.com/engbiopct/EasySSR. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
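As a toy illustration of the perfect-SSR mining that EasySSR automates (this is not EasySSR's algorithm, and the per-motif repeat-count thresholds below are invented), a short regex pass over a sequence:

```python
import re

# Hypothetical minimum repeat counts per motif length (1-6 bp).
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_perfect_ssrs(seq: str):
    """Yield (start, motif, repeat_count) for perfect tandem repeats."""
    seq = seq.upper()
    for k in range(1, 7):
        # (.{k}) captures a candidate motif; \1{n-1,} requires >= n copies.
        pattern = re.compile(r"(.{%d})\1{%d,}" % (k, MIN_REPEATS[k] - 1))
        for m in pattern.finditer(seq):
            yield m.start(), m.group(1), len(m.group(0)) // k

# A real tool would also deduplicate motif multiples (e.g. TT inside a T run).
demo = "ACGT" + "AT" * 8 + "GGC" + "CAG" * 6 + "T" * 12
for start, motif, count in find_perfect_ssrs(demo):
    print(start, motif, count)
```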
41. A Novel Clustering Method Based on Adjacent Grids Searching.
- Author
-
Li, Zhimeng, Zhong, Wen, Liao, Weiwen, Zhao, Jian, Yu, Ming, and He, Gaiyun
- Subjects
- *
IMAGE segmentation , *INFORMATION retrieval - Abstract
Clustering is used to analyze the intrinsic structure of a dataset based on the similarity of datapoints. Its widespread use, from image segmentation to object recognition and information retrieval, demands great robustness in the clustering process. In this paper, a novel clustering method based on adjacent grid searching (CAGS) is proposed. CAGS consists of two steps: an adaptive grid-space construction strategy and a clustering strategy based on adjacent grid searching. In the first step, a multidimensional grid space is constructed to provide a quantization structure for the input dataset, and noise and cluster halo are automatically distinguished according to grid density. Moreover, the adaptive grid generation process solves a common problem of grid clustering, in which the number of cells increases sharply with the dimension. In the second step, a two-stage traversal process accomplishes the cluster recognition. Cluster cores with arbitrary shapes can be found by concealing the halo points, so the number of clusters is easily identified by CAGS. CAGS therefore has the potential to be widely used for clustering datasets with different characteristics. We test the clustering performance of CAGS on six different types of datasets: datasets with noise, large-scale datasets, high-dimensional datasets, datasets with arbitrary shapes, datasets with large differences in density between classes, and datasets with high overlap between classes. Experimental results show that CAGS, which performed best on 10 out of 11 tests, outperforms state-of-the-art clustering methods on these dataset types. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
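The first step of CAGS, quantizing points into grid cells and using cell density to separate cores from noise, can be sketched compactly; this toy fixes a 2-D grid and a hypothetical density threshold and implements none of CAGS's adaptive machinery:

```python
import numpy as np
from collections import defaultdict

def grid_cluster(points: np.ndarray, cell: float, min_pts: int):
    """Label points by flood-filling adjacent dense grid cells."""
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        cells[tuple((p // cell).astype(int))].append(idx)
    dense = {c for c, members in cells.items() if len(members) >= min_pts}

    labels, cluster = {}, 0
    for seed in dense:
        if seed in labels:
            continue
        stack = [seed]  # DFS over edge/corner-adjacent dense cells
        while stack:
            c = stack.pop()
            if c in labels:
                continue
            labels[c] = cluster
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (c[0] + dx, c[1] + dy)
                    if nb in dense and nb not in labels:
                        stack.append(nb)
        cluster += 1

    out = np.full(len(points), -1)  # -1 marks noise / halo points
    for c, members in cells.items():
        if c in labels:
            out[members] = labels[c]
    return out

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
print(set(grid_cluster(pts, cell=0.5, min_pts=3)))  # two clusters plus noise
```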
42. Slight religiosity associated with a lower incidence of any fracture among healthy people in a multireligious country
- Author
-
Daiki Kobayashi, Hironori Kuga, and Takuro Shimbo
- Subjects
Fracture ,Japan ,Large scale ,Longitudinal study ,Religion ,T score ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Abstract Background The aim of this study was to evaluate the association between the degree of religiosity and subsequent fractures and a decrease in bone mineral density in a Japanese population. Methods We conducted a retrospective longitudinal study at St. Luke's International Hospital in Tokyo, Japan, from 2005 to 2018. All participants who underwent voluntary health check-ups were included. Our outcomes were any fractures and the change in T-score from baseline to each visit. We compared these outcomes by the self-reported degree of religiosity (not at all; slightly; somewhat; very) and adjusted for potential confounders. Results A total of 65,898 participants were included in our study. Their mean age was 46.2 (SD: 12.2) years, and 33,014 (50.1%) were male. During a median follow-up of 2,500 days (interquartile range (IQR): 987–3,970), 2,753 (4.2%) experienced fractures, and their mean delta T-score was -0.03% (SD: 18.3). In multivariable longitudinal analyses, the slightly religious group had a statistically lower adjusted odds ratio (AOR) for a fracture than the nonreligious group (AOR: 0.81, 95% confidence interval (CI): 0.71 to 0.92). Conclusions We demonstrated that slightly religious people, but not somewhat or very religious people, had a lower incidence of fracture than nonreligious individuals, although the T-scores were similar regardless of the degree of religiosity.
- Published
- 2023
- Full Text
- View/download PDF
43. Landslide susceptibility assessment on a large scale in the Podsljeme area, City of Zagreb (Croatia)
- Author
-
Sanja Bernat Gazibara, Marko Sinčić, Martin Krkač, Hrvoje Lukačić, and Snježana Mihalić Arbanas
- Subjects
Landslide susceptibility modelling ,large scale ,bivariate statistical method ,LiDAR ,Croatia ,Maps ,G3180-9980 - Abstract
The study presents a landslide susceptibility assessment on a large scale in the City of Zagreb (Croatia). The susceptibility analysis was performed using the Weight of Evidence model in the pilot area (21 km2) and applying the obtained weight values for each class of conditioning factors in the study area (130 km2). The input data were a LiDAR-based landslide inventory and six conditioning factors derived from a 5 m LiDAR DTM, a 5 m SfM DEM, and geological and land-use maps. The validation of the susceptibility assessment for the study area was evaluated with a ROC curve, which showed a high prediction rate (AUC = 84.4%), similar to the prediction rate for the pilot area (AUC = 86.9%). Based on the results, it can be concluded that the proposed method for large-scale landslide susceptibility assessment, where susceptibility conditions are defined in smaller pilot areas, can be applied to larger research areas with similar geomorphological and geological conditions.
- Published
- 2023
- Full Text
- View/download PDF
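The Weight of Evidence model named in the landslide abstract above scores each conditioning-factor class by contrasting landslide presence and absence; its standard bivariate form is (notation mine, not necessarily the authors'):

```latex
W^{+} = \ln\frac{P(B \mid L)}{P(B \mid \bar{L})}, \qquad
W^{-} = \ln\frac{P(\bar{B} \mid L)}{P(\bar{B} \mid \bar{L})}, \qquad
C = W^{+} - W^{-}
```

where B denotes presence of the factor class, L presence of a landslide, bars denote absence, and the contrast C is the weight assigned to the class when building the susceptibility map.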
44. A New Schedule-Based Scheme for Uplink Communications in LoRaWAN
- Author
-
Chekra El Fehri, Nouha Baccour, and Ines Kammoun
- Subjects
Class A ,class B ,energy efficiency ,reliability ,large scale ,LoRaWAN ,Telecommunication ,TK5101-6720 ,Transportation and communications ,HE1-9990 - Abstract
Long Range Wide Area Network (LoRaWAN) is currently one of the leading communication technologies for Internet of Things (IoT) connectivity. It offers long-range, wide-area communication at low power, low cost, and low data rate. However, several studies have demonstrated that LoRaWAN suffers excessive collisions, and thus performance degradation, especially at large scale, mainly due to the ALOHA-based channel access technique adopted for uplink communications. In this work, we present a new schedule-based scheme that deterministically allocates time, channel, and spreading factor for collision-free uplink communication in LoRaWAN. For the sake of interoperability, our scheme involves neither an additional synchronization phase for end-devices nor major changes to the LoRaWAN specification, as suggested in most existing studies in the literature. The performance evaluation of our proposed scheme shows an outstanding improvement in network performance in terms of latency and energy consumption. For instance, for an inter-packet transmission interval of 1800 s (one packet every 30 min) and a cell size of 1000 end-devices, results show that with the proposed schedule-based scheme, uplink communication latency and energy consumption are reduced by 89% and 78%, respectively, compared with the original LoRaWAN legacy class A.
- Published
- 2023
- Full Text
- View/download PDF
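The deterministic (time, channel, spreading factor) allocation described in the LoRaWAN abstract above can be illustrated with a toy round-robin allocator; the channel list, SF range, and data layout are assumptions for illustration, not the paper's scheme or the LoRaWAN specification:

```python
from dataclasses import dataclass
from itertools import product

CHANNELS = [868.1, 868.3, 868.5]        # MHz, example EU868 uplink channels
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]

@dataclass(frozen=True)
class Allocation:
    slot: int       # time-slot index within the scheduling period
    channel: float  # MHz
    sf: int         # spreading factor

def schedule(device_ids):
    """Assign each device a unique (slot, channel, SF) combination.

    Devices sharing a slot use different channel/SF pairs, which are
    (quasi-)orthogonal in LoRa, so no two scheduled uplinks collide.
    """
    combos = list(product(CHANNELS, SPREADING_FACTORS))
    plan = {}
    for i, dev in enumerate(device_ids):
        slot, k = divmod(i, len(combos))
        ch, sf = combos[k]
        plan[dev] = Allocation(slot, ch, sf)
    return plan

for dev, alloc in schedule([f"ed{n}" for n in range(5)]).items():
    print(dev, alloc)
```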
45. Spikes from sound: a model of the human auditory periphery on SpiNNaker
- Author
-
James, Robert, Garside, James, and Koch, Dirk
- Subjects
006.3 ,parallel computing ,large scale ,spiking neural networks ,cochlear modelling ,SpiNNaker ,auditory pathway ,neuromorphic hardware - Abstract
From a computational perspective, much can be learned from studying the brain. For auditory processing, three biological attributes are presented as being responsible for good hearing performance in challenging environments: first, the scale of biological cell resources allocated to the sensory pathway and the cortical networks that process auditory information; second, the format in which information is encoded in the brain, namely precisely timed spiking action potentials; and finally, the adaptation mechanisms generated by the descending feedback projections between the brain regions involved in hearing. To further understand these attributes through simulation, a digital model of the complete auditory pathway must be built; the scale of such a model requires that it is mapped onto a large parallel computer. The work presented in this thesis contributes towards this goal by developing a system that simulates the conversion of sound into spiking neural action potentials inside the ear and the subsequent processing of some auditory brain regions. This system is intentionally distributed across massively parallel neuromorphic SpiNNaker hardware to avoid the large-scale simulation performance penalties that conventional computer platforms incur as the number of modelled biological cells increases. Performance analysis as the scale varies highlights issues within the current methods used for simulating spiking neural networks on the SpiNNaker platform. The system presented in this thesis has the potential for expansion to simulate a complete model of the auditory pathway across a large SpiNNaker machine.
- Published
- 2020
46. Simulation Environment for Scalability and Performance Analysis in Hierarchically Organized IoT Systems
- Author
-
H. Turkmanović, I. Popović, Z. Čiča, and D. Drajić
- Subjects
iot ,simulation framework ,large scale ,scalability ,Telecommunication ,TK5101-6720 - Abstract
The accelerated development of technologies, especially in the field of telecommunications, eases the integration of embedded devices into various IoT applications. Modern IoT applications assume heterogeneous embedded platforms capable of collecting, processing, and exchanging data between the tiers of the IoT system architecture. Designing a multi-tier IoT system, even for an architecture involving a small number of intelligent embedded devices, can be a very demanding process, especially when dealing with strict IoT application requirements concerning performance, scalability, and energy consumption. In this paper, an open-source simulation framework for the performance analysis of an arbitrary multi-tiered IoT system is presented. The framework provides insight into data availability within the tiers of an IoT system, enabling designers to evaluate application performance and to engineer system operation and deployment. Besides performance analysis, the proposed framework enables the analysis of energy consumption and of architecture scalability under different communication patterns and technologies. A case study of a large-scale IoT application demonstrating the framework's potential for scalability and data-availability analysis is also given.
- Published
- 2022
- Full Text
- View/download PDF
47. Highlights from the 1st European cancer dependency map symposium and workshop.
- Author
-
Trastulla, Lucia, Savino, Aurora, Beltrao, Pedro, Ciriano, Isidro Cortés, Fenici, Peter, Garnett, Mathew J., Guerini, Ilaria, Bigas, Nuria Lòpez, Mattaj, Iain, Petsalaki, Evangelia, Riva, Laura, Tape, Christopher J., Leeuwen, Jolanda Van, Sharma, Sumana, Vazquez, Francisca, and Iorio, Francesco
- Subjects
- *
POSTER presentations , *CONFERENCES & conventions , *MEDICAL screening , *DRUG discovery - Abstract
The systematic identification of tumour vulnerabilities through perturbational experiments on cancer models, including genome editing and drug screens, is playing a crucial role in combating cancer. This collective effort is known as the Cancer Dependency Map (DepMap). The 1st European Cancer Dependency Map Symposium (EuroDepMap), held in Milan last May, featured talks, a roundtable discussion, and a poster session, showcasing the latest discoveries and future challenges related to the DepMap. The symposium aimed to facilitate interactions among participants across Europe, encourage idea exchange with leading experts, and present their work and future projects. Importantly, it sparked discussions on future endeavours, such as screening more complex cancer models and accounting for tumour evolution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Dynamic Tikhonov State Forecasting Based on Large-Scale Deep Neural Network Constraints †.
- Author
-
Molina, Cristhian, Martinez, Juan, and Giraldo, Eduardo
- Subjects
BRAIN mapping ,ELECTROENCEPHALOGRAPHY ,INVERSE problems ,CONVOLUTIONAL neural networks ,TIME series analysis - Abstract
This work presents dynamic Tikhonov state forecasting based on large-scale deep neural network constraints for the solution of a dynamic inverse problem of electroencephalographic brain mapping. The dynamic constraint is obtained by using a large-scale deep neural network to approximate the dynamics of the state evolution in a discrete large-scale state-space model. An evaluation using neural networks with several hidden-layer configurations is performed to obtain an adequate structure for large-scale system dynamic tracking. The proposed approach is evaluated on two discrete-time models of 2004 and 10,016 states, both related to an electroencephalographic EEG-generation problem. A comparative analysis is performed against static and dynamic Tikhonov approaches with simplified dynamic constraints. The results indicate that the deep neural networks adequately approximate large-scale state dynamics, thereby improving the dynamic inverse problem solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
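The dynamic Tikhonov formulation in the abstract above can be written generically as a data-fit term plus a penalty tying the current state to the network's one-step prediction; the notation below is mine, not the authors':

```latex
\hat{x}_k = \arg\min_{x} \; \lVert y_k - H x \rVert_2^2
  \;+\; \lambda \, \lVert x - f_{\theta}(\hat{x}_{k-1}) \rVert_2^2
```

where y_k are the EEG measurements at step k, H the forward (lead-field) operator, f_θ the trained deep network approximating the state evolution, and λ the regularization parameter; setting the penalty's reference to zero recovers the static Tikhonov solution.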
49. Effect of Gradation Characteristics and Particle Morphology on Internal Erosion of Sandy Gravels: A Large-Scale Experimental Study.
- Author
-
Deng, Zezhi, Chen, Xiangshan, Jin, Wei, and Wang, Gang
- Subjects
EROSION ,EARTH dams ,SOIL permeability ,GRAVEL ,PARTICULATE matter - Abstract
Internal erosion refers to the seepage-induced migration of fine particles in soil. Deep alluviums in valleys usually contain cohesionless gap-graded sandy gravels with poor internal stability, and constructing embankment dams on such alluviums poses a high risk of internal erosion. This study systematically investigated the internal erosion of cohesionless gap-graded sandy gravels with an emphasis on the effects of gradation characteristics and particle morphology. A series of large-scale internal erosion tests was conducted on gap-graded sandy gravels with different gap ratios, fines contents, and coarse particle morphologies under a surcharge pressure of 1 MPa. The internal erosion characteristics, including soil permeability, eroded soil mass, and soil deformation during the erosion process, were comparatively analyzed in combination with a meso-mechanism interpretation. The results show that increasing the gap ratio reduces the internal stability of the soil and promotes mechanical instability. Fines content affects the permeability and internal stability of the soil by altering the filling state of inter-granular pores and the constraints on fine particles. Coarse particles with higher roundness, sphericity, and smoothness facilitate the movement of fine particles and promote mechanical instability of the soil matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Large-Scale Production and Optical Properties of a High-Quality SnS2 Single Crystal Grown Using the Chemical Vapor Transportation Method.
- Author
-
Tripathi, Prashant, Kumar, Arun, Bankar, Prashant K., Singh, Kedar, and Gupta, Bipin Kumar
- Subjects
SINGLE crystals ,CHEMICAL transportation ,OPTICAL properties ,OPTOELECTRONIC devices ,X-ray diffraction ,CHEMICAL reactions - Abstract
The scientific community believes that high-quality, bulk layered, semiconducting single crystals are crucial for producing two-dimensional (2D) nanosheets. This has a significant impact on current cutting-edge science in the development of next-generation electrical and optoelectronic devices. To meet this ever-increasing demand, efforts have been made to manufacture high-quality SnS2 single crystals utilizing low-cost CVT (chemical vapor transportation) technology, which allows for large-scale crystal production. Based on the chemical reaction that occurs throughout the CVT process, a viable mechanism for SnS2 growth is postulated in this paper. Optical measurements, XRD with Le Bail fitting, TEM, and SEM are used to validate the quality, phase, gross structure/microstructure, and morphology of the SnS2 single crystals. Furthermore, Raman, TXRF, XPS, UV–Vis, and PL spectroscopy are used to corroborate the quality of the SnS2 single crystals, as well as the proposed energy level diagram for the indirect transition in bulk SnS2 single crystals. As a result, the suggested approach provides a cost-effective route for growing high-quality SnS2 single crystals, which could become a new alternative resource for producing 2D SnS2 nanosheets, in great demand for designing next-generation optoelectronic and quantum devices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF