29 results for "Yao, Jianguo"
Search Results
2. HAVANA: Hard Negative Sample-Aware Self-Supervised Contrastive Learning for Airborne Laser Scanning Point Cloud Semantic Segmentation.
- Author
Zhang, Yunsheng, Yao, Jianguo, Zhang, Ruixiang, Wang, Xuying, Chen, Siyang, and Fu, Han
- Subjects
POINT cloud; AIRBORNE-based remote sensing; AIRBORNE lasers; OPTICAL scanners; MACHINE learning
- Abstract
Deep Neural Network (DNN)-based point cloud semantic segmentation has achieved significant breakthroughs using large-scale labeled aerial laser point cloud datasets. However, annotating such large-scale point clouds is time-consuming. Self-Supervised Learning (SSL) is a promising approach to this problem: a DNN model is pre-trained on unlabeled samples and then fine-tuned on a downstream task with very limited labels. Traditional contrastive learning for point clouds selects the hardest negative samples solely by the distance between the embedded features derived from the learning process, so some negative samples may come from the same class as the anchor, which reduces the effectiveness of contrastive learning. This work proposes a hard-negative sample-aware self-supervised contrastive learning algorithm to pre-train the model for semantic segmentation. We designed a k-means clustering-based Absolute Positive And Negative samples (AbsPAN) strategy to filter out possible false-negative samples. Experiments on two typical ALS benchmark datasets demonstrate that the proposed method is more appealing than supervised training schemes without pre-training. Especially when labels are severely inadequate (10% of the ISPRS training set), the results obtained by the proposed HAVANA method still exceed 94% of the performance of the supervised paradigm trained on the full training set. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
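A minimal sketch of the k-means-based false-negative filtering idea (the AbsPAN strategy) described in the HAVANA abstract above. The feature dimensionality, cluster count, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): filter likely false negatives
# for contrastive learning by k-means clustering of embedded point features.
import numpy as np
from sklearn.cluster import KMeans

def select_hard_negatives(anchor, candidates, n_clusters=8, k=16):
    """Return the k hardest negatives, excluding candidates that share the
    anchor's k-means cluster (suspected same-class false negatives)."""
    feats = np.vstack([anchor[None, :], candidates])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    anchor_label, cand_labels = labels[0], labels[1:]
    keep = cand_labels != anchor_label          # drop suspected false negatives
    kept = candidates[keep]
    dists = np.linalg.norm(kept - anchor, axis=1)
    return kept[np.argsort(dists)[:k]]          # hardest = closest in feature space

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    anchor = rng.normal(size=64)                # assumed 64-dim embeddings
    candidates = rng.normal(size=(256, 64))
    print(select_hard_negatives(anchor, candidates).shape)
```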
3. MDev-NVMe: Mediated Pass-Through NVMe Virtualization Solution With Adaptive Polling.
- Author
Peng, Bo, Yao, Jianguo, Dong, Yaozu, and Guan, Haibing
- Subjects
CLOUD storage; PARALLEL processing; CLOUD computing; SERVER farms (Computer network management); DATA warehousing; NONVOLATILE memory; SCALABILITY
- Abstract
The fast access to data and highly parallel processing required in high-performance computing create an urgent demand for improved NVMe storage within modern data centers. However, the unsatisfactory performance of earlier NVMe virtualization shows that NVMe devices are often underutilized within cloud platforms. An NVMe virtualization mechanism that combines high performance with device sharing has therefore captured researchers' and developers' attention. This article introduces MDev-NVMe, a new virtualization solution for NVMe storage devices with (1) full NVMe storage virtualization for VMs running the native NVMe driver, (2) a mediated pass-through mechanism for NVMe management, and (3) adaptive configuration of active polling optimization to simultaneously achieve high throughput, low latency, and substantial device scalability. We implement MDev-NVMe as a Linux kernel module. The article then evaluates MDev-NVMe with Intel OPTANE and P3600 SSDs, comparing several mainstream NVMe virtualization mechanisms using application-level I/O benchmarks. MDev-NVMe with active polling demonstrates a 142 percent improvement over native (interrupt-driven) throughput and over 2.5 × the Virtio throughput, with only 70 percent of native average latency and 31 percent of Virtio average latency. Finally, the advantages of MDev-NVMe and the importance of adaptive polling are discussed, offering evidence that MDev-NVMe is a superior virtualization choice for cloud storage. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Algebraic Construction of Optimal Frequency Hopping Patterns Based on Welch Costas Arrays.
- Author
Yao, Jianguo, Jiang, Rui, and Heng, Wei
- Subjects
PATTERNMAKING; FINITE fields; CONSTRUCTION; MATHEMATICAL models
- Abstract
This paper systematically expounds the theory of optimal frequency hopping patterns based on Welch Costas arrays with a 1-gap row, a theory established by studying the properties and mathematical models of the frequency hopping patterns obtained by applying two-dimensional cyclic shifts to Welch Costas arrays. The algebraic construction and the autocorrelation and cross-correlation properties of Welch Costas arrays with a 1-gap row are studied, and several theorems are proved. Frequency hopping patterns designed by means of Welch Costas arrays with a 1-gap row have ideal autocorrelation and cross-correlation properties and are therefore optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
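The Welch construction behind the arrays in the abstract above can be written down directly: for a prime p and a primitive root g modulo p, the permutation a(i) = g^i mod p is a Costas array. The small check below illustrates only this standard construction; it says nothing about the paper's 1-gap-row variant.

```python
# Illustrative sketch: generate a Welch-construction Costas array and verify
# the Costas property (all displacement vectors between dots are distinct).
def welch_costas(p, g):
    """Permutation a[i] = g^(i+1) mod p for i = 0..p-2 (Welch construction)."""
    return [pow(g, i + 1, p) for i in range(p - 1)]

def is_costas(perm):
    n, seen = len(perm), set()
    for d in range(1, n):                       # column displacement
        for i in range(n - d):
            vec = (d, perm[i + d] - perm[i])    # displacement vector
            if vec in seen:
                return False
            seen.add(vec)
    return True

if __name__ == "__main__":
    p, g = 11, 2                                # 2 is a primitive root mod 11
    arr = welch_costas(p, g)
    print(arr, is_costas(arr))                  # a length-10 Costas permutation
```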
5. Adaptive Power Management through Thermal Aware Workload Balancing in Internet Data Centers.
- Author
Yao, Jianguo, Guan, Haibing, Luo, Jianying, Rao, Lei, and Liu, Xue
- Subjects
SERVER farms (Computer network management); ENERGY consumption; CLOUD computing; CLIENT/SERVER computing equipment; AIR conditioning equipment; COMPUTER rooms
- Abstract
The past decade witnessed tremendous growth of online services and applications. Together with the rise of cloud computing, more and more computation is hosted by Internet data centers (IDCs). Today's IDCs are achieving significant advances in communication and computation capabilities. However, along with the increasing demand from IDC clients, the power consumed to run and cool these IDCs has been skyrocketing. Most existing works optimize the power consumption of either servers or Computer Room Air Conditioners (CRACs), and overlook the correlation between the power consumption of these two types of equipment. In this paper, we propose an adaptive power control method that leverages the correlation between the power consumption of servers and CRACs. To capture workload uncertainties and thermal dynamics, we exploit Recursive Least Squares based Model Predictive Control (MPC) to solve the power control problem. Performance evaluations show that our approach effectively reduces peak power. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
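A minimal sketch of the recursive least squares (RLS) model update that the abstract above couples with MPC. The scalar regression structure (previous temperature plus server power as regressors), the forgetting factor, and all names are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch: recursive least squares with a forgetting factor,
# as one way to keep a thermal/power model up to date for MPC.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.98, delta=100.0):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = delta * np.eye(n_params)    # inverse covariance
        self.lam = lam                       # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector, y: measured output (e.g. inlet temperature)."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta += k * (y - phi @ self.theta)              # correct by prediction error
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_theta = np.array([0.9, 0.05])       # assumed [temperature lag, power-to-temp gain]
    rls, y_prev = RLS(2), 20.0
    for _ in range(200):
        power = rng.uniform(50, 100)
        phi = np.array([y_prev, power])
        y = true_theta @ phi + rng.normal(scale=0.1)
        est = rls.update(phi, y)
        y_prev = y
    print(est)                                # converges near true_theta
```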
6. COMIC: Cost Optimization for Internet Content Multihoming.
- Author
Yao, Jianguo, Zhou, Haihang, Luo, Jianying, Liu, Xue, and Guan, Haibing
- Subjects
INTERNET content; CONTENT delivery networks; CLOUD computing; DISTRIBUTED computing; DISTRIBUTED databases; INTERNET servers; INDUSTRIAL costs
- Abstract
Content service is a type of Internet cloud service that provides end-users with plentiful content. To ensure high-performance content delivery, content services use a technology known as content multihoming: contents are generated from multiple geographically distributed data centers and delivered by multiple distributed content distribution networks (CDNs). The electricity costs for data centers and the usage costs for CDNs are the major contributors to the content service cost. Because electricity prices vary across data centers and usage costs vary across CDNs, scheduling data centers and CDNs has tremendous consequences for optimizing the content service cost. In this paper, we propose a novel framework named Cost Optimization for Internet Content Multihoming (COMIC). COMIC dynamically balances end-users' loads among data centers and CDNs so as to minimize the content service cost. Using real-life electricity prices and CDN traces, the experiments demonstrate that COMIC effectively reduces the content service cost by more than 20 percent. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
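In its simplest form, the cost-minimizing load assignment described above can be cast as a linear program that splits end-user demand across data centers (with different electricity prices) and CDNs (with different usage prices). The toy sketch below is only an illustration under those assumptions; prices, capacities, and the two-layer constraint structure are made up and are not the COMIC framework itself.

```python
# Illustrative sketch: split a content demand across data centers and CDNs
# to minimize cost = sum(price_i * load_i), subject to capacity limits.
from scipy.optimize import linprog

elec_price = [0.06, 0.11, 0.09]      # assumed $ per unit served at each data center
cdn_price = [0.08, 0.05]             # assumed $ per unit delivered by each CDN
dc_cap = [40.0, 60.0, 50.0]
cdn_cap = [70.0, 80.0]
demand = 100.0

costs = elec_price + cdn_price
# Assumption: content is generated by data centers and delivered by CDNs,
# so each layer must carry the full demand.
a_eq = [[1, 1, 1, 0, 0],
        [0, 0, 0, 1, 1]]
b_eq = [demand, demand]
bounds = [(0, c) for c in dc_cap + cdn_cap]

res = linprog(costs, A_eq=a_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)                # cheapest feasible split and its total cost
```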
7. Power Admission Control With Predictive Thermal Management in Smart Buildings.
- Author
Yao, Jianguo, Costanzo, Giuseppe Tommaso, Zhu, Guchuan, and Wen, Bin
- Subjects
INTELLIGENT building equipment; THERMAL management (Electronic packaging); THERMAL comfort; THERMAL efficiency; ELECTRIC power consumption management
- Abstract
This paper presents a control scheme for thermal management in smart buildings based on predictive power admission control. This approach combines model predictive control with budget-schedulability analysis in order to reduce peak power consumption as well as ensure thermal comfort. First, the power budget with a given thermal comfort constraint is optimized through budget-schedulability analysis which amounts to solving a constrained linear programming problem. Second, the effective peak power demand is reduced by means of the optimal scheduling and cooperative operation of multiple thermal appliances. The performance of the proposed control scheme is assessed by simulation based on the thermal dynamics of a real eight-room office building located at Danish Technical University. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
8. vGASA: Adaptive Scheduling Algorithm of Virtualized GPU Resource in Cloud Gaming.
- Author
Zhang, Chao, Yao, Jianguo, Qi, Zhengwei, Yu, Miao, and Guan, Haibing
- Subjects
VIDEO games; CLOUD computing; VIRTUAL reality; GRAPHICS processing units; SCHEDULING software; COMPUTER algorithms; FEEDBACK control systems
- Abstract
As virtualization technology for GPUs matures, cloud gaming has become an emerging application among cloud services. On top of the poor default mechanisms for GPU resource sharing, the performance of cloud games is inevitably undermined by various runtime uncertainties, such as rendering complex game scenarios. The question of how to handle these runtime uncertainties in GPU resource sharing remains unanswered. To address this challenge, we propose vGASA, a virtualized GPU resource adaptive scheduling algorithm for cloud gaming. vGASA interposes scheduling algorithms in the graphics API of the operating system, so neither the host graphics driver nor the guest operating system needs to be modified. To fulfill the service level agreement as well as maximize GPU usage, we propose three adaptive scheduling algorithms featuring feedback control that mitigate the impact of runtime uncertainties on system performance. The experimental results demonstrate that vGASA is able to maintain the frames per second of various workloads at the desired level with performance overhead limited to 5-12 percent. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
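A toy illustration of the feedback idea in the abstract above: adjust a VM's share of GPU render time so that measured frames per second track a service-level target despite runtime uncertainty. The controller form, gain, noise model, and names are illustrative assumptions, not vGASA's scheduling algorithms.

```python
# Illustrative sketch: integral feedback that nudges a VM's GPU time share
# until measured FPS settles near the SLA target.
import random

def measure_fps(gpu_share, scene_load):
    """Stand-in for a real measurement: FPS grows with GPU share,
    drops with scene complexity, plus runtime noise."""
    return max(1.0, 200.0 * gpu_share / scene_load + random.gauss(0, 1.0))

def control_loop(target_fps=60.0, gain=0.002, steps=50):
    share = 0.3                                  # initial GPU time share
    for _ in range(steps):
        scene_load = random.uniform(0.8, 1.2)    # runtime uncertainty
        fps = measure_fps(share, scene_load)
        share += gain * (target_fps - fps)       # integral action on the error
        share = min(max(share, 0.05), 1.0)       # clamp to a valid share
    return share, fps

if __name__ == "__main__":
    random.seed(0)
    print(control_loop())
```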
9. Online adaptive utilization control for real-time embedded multiprocessor systems
- Author
Yao, Jianguo, Liu, Xue, Gu, Zonghua, Wang, Xiaorui, and Li, Jian
- Subjects
ADAPTIVE control systems; EMBEDDED computer systems; MULTIPROCESSORS; COMPUTER scheduling; MEASUREMENT errors; LEAST squares software; SIMULATION methods & models
- Abstract
Many embedded systems have stringent real-time constraints. An effective technique for meeting real-time constraints is to keep the processor utilization on each node at or below the schedulable utilization bound, even though each task's actual execution time may have large uncertainties and deviate considerably from its estimated value. Recently, researchers have proposed solutions based on Model Predictive Control (MPC) for the utilization control problem. Although these approaches can handle a limited range of execution time estimation errors, the system may suffer performance deterioration or even become unstable under large estimation errors. In this paper, we present two online adaptive optimal control techniques: one based on Recursive Least Squares (RLS) model identification combined with a Linear Quadratic (LQ) optimal controller, and the other based on Adaptive Critic Design (ACD). Simulation experiments demonstrate that both the LQ optimal controller and the ACD-based controller outperform the MPC-based controller, and that the ACD-based controller has the smallest aggregate tracking error. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
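One of the two controllers described in the abstract above is an LQ optimal controller built on an RLS-identified model. A minimal sketch of computing the discrete-time LQ state-feedback gain for such an identified model is shown below; the system matrices and weights are arbitrary placeholders, not the paper's utilization model.

```python
# Illustrative sketch: discrete-time LQR gain K for u = -K x, computed from
# the discrete algebraic Riccati equation for an (identified) model.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])          # assumed identified dynamics
B = np.array([[0.0],
              [0.5]])               # effect of rate adjustment on utilization
Q = np.diag([10.0, 1.0])            # penalize utilization tracking error
R = np.array([[1.0]])               # penalize large rate changes

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("LQ gain:", K)

# Closed-loop check: eigenvalues of (A - B K) should lie inside the unit circle.
print(np.abs(np.linalg.eigvals(A - B @ K)))
```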
10. Proactive coordination for low-congestion multi-path datacenter networks.
- Author
Peng, Bo, Yao, Jianguo, Xu, Xin, and Guan, Haibing
- Subjects
SERVER farms (Computer network management); BANDWIDTHS
- Abstract
Modern datacenters host a rich mix of workloads, each of which puts forward different service-level objectives, including high throughput, low latency, etc. Currently, most datacenters introduce statistical multiplexing and oversubscription into the network design to lower total cost, which can easily lead to network congestion, especially when the network is highly occupied by throughput-intensive workloads. This paper describes ProCAM, a proactive congestion avoidance mechanism for datacenter networks. As throughput-intensive flows are the chief culprit of network congestion, ProCAM adapts multi-path routing to control transmission bandwidth and exploits the predictability of throughput-intensive flows to proactively prearrange an optimal coordination scheme (desynchronizing the sending times of concurrent long flows), by solving a low-congestion transmission model that minimizes network-wide host-to-host transmission latency from a global perspective. In this way, queue lengths in buffers can be kept at a low level and the performance of latency-sensitive flows can be guaranteed. In extensive simulations with Mininet and the SDN controller Ryu, the proposed ProCAM achieves high throughput with nearly zero packet loss and low latency when the network is highly occupied. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
11. A Throughput-Oriented NVMe Storage Virtualization With Workload-Aware Management.
- Author
Peng, Bo, Yang, Ming, Yao, Jianguo, and Guan, Haibing
- Subjects
STORAGE; CLOUD storage; ELECTRONIC data processing; RESOURCE management; NONVOLATILE memory; SCALABILITY
- Abstract
Storage virtualization is an important component of large-scale online services in multi-tenant clouds. It typically shares the physical storage among guest machines and performs transactional operations for high-performance data processing. However, even with the recent mediated pass-through virtualization optimization, multi-tenant storage I/O hits a bottleneck, which degrades the throughput of cloud storage services. We observe that the root cause of the problem is resource management that is unaware of the varying and imbalanced workloads in the multi-tenant cloud storage setting. In this paper, we present FinNVMe, a new throughput-oriented NVMe storage virtualization management mechanism that (1) passes through performance-critical I/O resources and emulates privileged resources to provide high throughput in a workload-aware manner among multi-tenant VMs, (2) enables fine-grained scheduling of I/O resources to achieve promising flexibility and scalability with respect to virtualization, and (3) adopts queue binding and queue shuffling to reduce the virtualization and management overhead, and employs active polling for further I/O acceleration. The article then evaluates FinNVMe with micro benchmarks on two typical scenarios (balanced and imbalanced workloads) and on real-world storage workloads to show its high throughput, along with the flexibility and scalability of virtualization and resource management. For example, FinNVMe achieves up to 20 percent throughput improvement with more stable latency under varying and imbalanced workloads. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. A distributed optimal reactive power flow for global transmission and distribution network.
- Author
Zhao, Jinquan, Zhang, Zhenwei, Yao, Jianguo, Yang, Shengchun, and Wang, Ke
- Subjects
REACTIVE power; ELECTRIC generators; ELECTRIC power transmission; ELECTRIC power distribution; ELECTRIC power systems
- Abstract
With the large-scale integration of distributed generators (DGs), the relationship between the transmission network (TN) and distribution networks (DNs) is becoming closer, especially in the aspects of reactive power and voltage. Optimal reactive power flow (ORPF) problems for the TN and DNs are therefore not suitable to be solved independently. Because the TN and DNs are operated by different control centers, a heterogeneous decomposition (HGD) based distributed ORPF method for global transmission and distribution (T&D) networks is presented in this paper. Considering the characteristics of the master–slave structure, the global T&D-ORPF problem can be decomposed into the TN-ORPF master sub-problem, the DN-ORPF slave sub-problems, and the boundary consistency coordination sub-problem. Any optimization algorithm based on duality and gradient theory can be adopted to solve the TN-ORPF or DN-ORPF sub-problems. By incorporating a penalty function into the optimization algorithm, the discrete control variables can be successively discretized, and the augmented Lagrangian functions therefore remain differentiable. Boundary sensitivities constructed from the boundary dual multipliers are used to decouple the global ORPF problem of the T&D networks. The same solution as the centralized optimization is achieved by transferring the boundary variables and sensitivities between the TN control center and the DN control centers. The simulation results of IEEE 30-bus (TN) and modified IEEE 33-bus (DN) test systems containing multiple DGs show that the HGD algorithm is effective. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Effects of Cranioplasty on Contralateral Subdural Effusion After Decompressive Craniectomy: A Literature Review.
- Author
Zhou, Wu, Wang, Zhihua, Zhu, Huaxin, Xie, Zhiping, Zhao, Yeyu, Li, Chengcai, Xie, Shenke, Luo, Jilai, Li, Meihua, and Yao, Jianguo
- Subjects
DECOMPRESSIVE craniectomy; EXUDATES & transudates; CEREBROSPINAL fluid shunts; OTITIS media with effusion
- Abstract
Contralateral subdural effusion (CSE) after decompressive craniectomy (CSEDC) is occasionally observed. Cranioplasty is routinely performed for reconstruction and has recently been associated with improvement of contralateral subdural effusion. We sought to systematically review all available literature and evaluate the effectiveness of cranioplasty for CSE. A PubMed, Web of Science, and Google Scholar search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, including studies reporting patients who underwent cranioplasty because of CSEDC. The search yielded 8 articles. A total of 56 patients ranging in age from 21 to 71 years developed CSEDC. Of them, 32 patients underwent cranioplasty. Eighteen cases with symptomatic CSE underwent cranioplasty alone, 2 cases received Ommaya drainage later because of a recurrence of CSE, and 1 case underwent a ventriculoperitoneal shunt because the CSE did not resolve completely and the ventricle dilated again. The symptoms of 14 cases lessened without recurrence after simultaneous cranioplasty and drainage or a shunt. The total success rate (CSE disappeared without recurrence) was 90.6% for patients who underwent cranioplasty; however, the total incidence of hydrocephalus was 40.1%. This review suggests that cranioplasty is effective for the treatment of CSEDC, particularly in intractable cases, but early cranioplasty may be more effective. In addition, hydrocephalus is fairly common after cranioplasty and requires further treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. A GAN-Based Fully Model-Free Learning Method for Short-Term Scheduling of Large Power System.
- Author
Guan, Jinyu, Tang, Hao, Wang, Jiye, Yao, Jianguo, Wang, Ke, and Mao, Wenbo
- Subjects
SCHEDULING; GENERATIVE adversarial networks
- Abstract
To make power generation scheduling more automatic, fast, and intelligent while reducing the dependence on accumulated dispatcher experience that is difficult to replicate, higher requirements are placed on the auxiliary decision-making system. In this paper, the short-term scheduling problem is treated as a regression task that concentrates on learning a reliable statistical model capturing the intrinsic logic of the dispatching policy from extensive historical dispatching experience. Since the Kullback-Leibler divergence can better measure the distance between two distributions, we designed a novel GAN-based learning method for the scheduling task. We also propose a feasible framework that combines the stages of learning, decision-making, and deployment to support the practical implementation of the proposed algorithm. In the experiments, a real short-term scheduling case on a regional large-scale power system is considered. The comparison with several methods further shows the superiority of the proposed GAN-based method under our implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. A network scheme for process bus in smart substations without using external synchronization.
- Author
Zhao, Jiaqing, Qian, Kejun, Yao, Jianguo, Wang, Shouding, Yang, Zhihong, Gao, Zonghe, Ding, Hongen, Yang, Zhixin, Su, Dawei, Li, Huiqun, Xu, Chunlei, Huo, Xuesong, and Yang, Hong
- Subjects
SMART power grids; ELECTRIC substations; SYNCHRONIZATION; SCHEME programming language; RELIABILITY in engineering; ETHERNET
- Abstract
The development of smart substations plays a crucial role in the development of smart grid. The major difference between a smart substation and an existing conventional substation lies in the process bus. Based on a thorough study over the network schemes of sampled values for process bus in smart substations, this paper summarizes and analyses major characteristics of existing network schemes. In order to overcome shortcomings of existing network schemes in time synchronization, the paper proposes a novel network scheme for process bus in smart substations without using external synchronizing clock. Compared to conventional network schemes, the proposed scheme does not need an external global synchronizing clock, thus can significantly improve the reliability of the process bus network. As a major research output of the key technological project, this work has been successfully applied to the 110 kV Shenxiang Substation in Suzhou, China. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
16. 4‐channel 35 Gbit/s parallel CMOS LDD.
- Author
Chen, Yingmei, Gong, Jianwei, Yao, Jianguo, and Tian, Ling
- Abstract
The design of a 4‐channel 35 Gbit/s parallel laser diode driver (LDD) using 65 nm CMOS technology is presented. The LDD driver consists of an input buffer stage, a pre‐amplifier stage and an output driver stage. The three‐stage cascaded amplifiers constitute the pre‐amplifier stage, and an active feedback technique is employed to expand bandwidth without consuming a large area. The output driver stage introduces RC negative feedback and inductive shunt peaking techniques to broaden the bandwidth. Measurement results show that the operating rate of each channel is up to 35 Gbit/s and the power consumption per data rate is only 4.4 mW/(Gbit/s). [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
17. Multi-objective optimization of synergic energy conservation and CO2 emission reduction in China's iron and steel industry under uncertainty.
- Author
Wang, Yihan, Wen, Zongguo, Yao, Jianguo, and Doh Dinga, Christian
- Subjects
IRON industry; STEEL industry; CARBON dioxide; LATIN hypercube sampling; METRIC spaces; ENERGY conservation
- Abstract
Industrial energy conservation and CO2 emission reduction (ECCER) management is a multi-objective optimization problem with multiple uncertainty factors. However, most studies have used deterministic optimization approaches and neglected the uncertainty factors that affect the effectiveness of the management strategies. This study adopts a multi-objective optimization model under uncertainty to solve the ECCER management problem in China's iron and steel industry. Three objectives are optimized simultaneously: minimum energy intensity, maximum CO2 emission reduction, and minimum cost. The study simulates the perturbation of the uncertainty parameters within their fluctuation ranges via Latin Hypercube Sampling, adopts the mean objective function value mechanism to calculate the objective value, and obtains the optimized results using the second-generation Non-dominated Sorting Genetic Algorithm (NSGA-II). Lastly, three types of preferences are set to generate final decision strategies via a Vague set-based approach. Results show that: (1) the algorithm is reliable, as verified by the Hypervolume indicator and Spacing metric; (2) the average energy intensity and CO2 emission reduction in the optimal solutions are 524.00 kgce and 125.03 kg per ton of steel, respectively, which are 6.3% and 7.6% lower than the deterministic optima; (3) the decision strategies encourage wider application of large-sized process equipment and identify 8-9 advanced technologies and eight reutilization approaches as key measures, but find that the use of renewable energy will remain at a low level. This study aims to solve the industrial ECCER optimization problem under uncertainty and puts forward policy suggestions for sustainable manufacturing in this industry. • Energy conservation, CO2 emission reduction, and cost control are set as objectives. • Random sampling is used to simulate the fluctuation of uncertain parameters. • This study adopts NSGA-II to search for optimal solutions. • Final decision strategies are generated via a Vague set based approach. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
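A small sketch of the Latin Hypercube Sampling step used in the study above to perturb uncertain parameters within their fluctuation ranges. The parameter names, ranges, and stand-in objective are made up for illustration and are not the study's data.

```python
# Illustrative sketch: Latin Hypercube Sampling of uncertain parameters
# within given fluctuation ranges, then averaging an objective over samples.
import numpy as np
from scipy.stats import qmc

ranges = {                                   # assumed fluctuation ranges
    "coke_price": (1800.0, 2200.0),          # CNY/t
    "scrap_ratio": (0.10, 0.25),
    "elec_emission_factor": (0.55, 0.75),    # kgCO2/kWh
}

sampler = qmc.LatinHypercube(d=len(ranges), seed=42)
unit = sampler.random(n=200)                          # 200 samples in [0, 1)^3
lows = np.array([lo for lo, _ in ranges.values()])
highs = np.array([hi for _, hi in ranges.values()])
samples = qmc.scale(unit, lows, highs)                # map to physical ranges

def objective(x):
    """Stand-in cost model; the real study evaluates energy, CO2 and cost."""
    coke, scrap, ef = x
    return 0.001 * coke + 50.0 * (1 - scrap) + 100.0 * ef

mean_obj = np.mean([objective(s) for s in samples])   # mean-value mechanism
print(round(mean_obj, 2))
```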
18. gMig: Efficient vGPU Live Migration with Overlapped Software-Based Dirty Page Verification.
- Author
Lu, Qiumin, Zheng, Xiao, Ma, Jiacheng, Dong, Yaozu, Qi, Zhengwei, Yao, Jianguo, He, Bingsheng, and Guan, Haibing
- Subjects
GRAPHICS processing units; COMPUTER architecture
- Abstract
This paper introduces gMig, an open-source and practical vGPU live migration solution for full virtualization. Taking advantage of the dirty patterns of GPU workloads, gMig presents a One-Shot Pre-Copy mechanism combined with a hashing-based Software Dirty Page technique to achieve efficient vGPU live migration. In particular, we propose three core techniques for gMig: 1) Dynamic Graphics Address Remapping, which parses and manipulates GPU commands to adjust the address mapping and adapt to a different environment after migration; 2) Software Dirty Page, which uses a hashing-based approach with sampling pre-filtering to detect page modification, overcomes the hardware limitation of commodity GPUs, and speeds up the migration by sending only the dirtied pages; 3) Overlapped Migration Process, which significantly reduces the hanging overhead by overlapping dirty page verification with transmission. Our evaluation shows that gMig achieves GPU live migration with an average downtime of 302 ms on Windows and 119 ms on Linux. With the help of Software Dirty Page, the number of GPU pages transferred during the downtime is reduced by up to 80.0 percent. The sampling filter and overlapped processing bring further 30.0 and 10.0 percent improvements in page processing, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
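A toy sketch of the software dirty page idea from the abstract above: keep a hash per guest GPU page and, after a pre-copy round, resend only pages whose hash changed. The sampling pre-filter here simply probes a few fixed byte offsets before falling back to a full hash; the page size, offsets, and names are assumptions, not gMig's implementation.

```python
# Illustrative sketch: hash-based dirty page detection with a cheap
# sampling pre-filter, as a software substitute for hardware dirty bits.
import hashlib

PAGE_SIZE = 4096                                # assumed guest page size
SAMPLE_OFFSETS = (0, 1024, 2048, 3072, 4095)    # bytes probed by the pre-filter

def page_digest(page: bytes) -> bytes:
    return hashlib.sha1(page).digest()

def find_dirty_pages(pages, old_digests, old_samples):
    """Return indices of pages modified since the previous round."""
    dirty = []
    for i, page in enumerate(pages):
        sample = bytes(page[off] for off in SAMPLE_OFFSETS)
        if sample != old_samples[i]:                 # cheap probe caught a change
            dirty.append(i)
        elif page_digest(page) != old_digests[i]:    # fall back to the full hash
            dirty.append(i)
    return dirty

if __name__ == "__main__":
    pages = [bytearray(PAGE_SIZE) for _ in range(4)]
    digests = [page_digest(bytes(p)) for p in pages]
    samples = [bytes(p[off] for off in SAMPLE_OFFSETS) for p in pages]
    pages[2][100] = 0xFF                             # guest writes to page 2
    print(find_dirty_pages([bytes(p) for p in pages], digests, samples))  # [2]
```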
19. Highly-efficient removal of Rhodamine B using a flow-through electrocatalytic filtration system: Characteristics, efficiency and mechanism.
- Author
Hu, Jinfei, Zhu, Hang, Lin, Min, Wu, Duoer, Yao, Jianguo, Sun, Tianyu, Ma, Xiangjuan, and Xia, Yijing
- Subjects
RHODAMINE B; STANNIC oxide; WATER filtration; DENSITY functional theory; CHARGE transfer
- Abstract
• An electrocatalytic filtration system with a Ti/SnO2-Sb membrane anode was developed. • Effects of various operating parameters on RhB degradation were optimized. • A degradation mechanism of RhB in the electrocatalytic filtration system was proposed. • The electrocatalytic filtration system can efficiently eliminate RhB from wastewater. The objective of this research is to develop an effective electrocatalytic filtration system using Ti/SnO2-Sb membrane anodes for eliminating Rhodamine B (RhB) from wastewater. Compared to the Ti/SnO2-Sb plate anode, the prepared Ti/SnO2-Sb membrane anodes featured smaller particle size, larger electrocatalytically active surface area, lower charge transfer resistance, and higher oxygen evolution potential. The electrocatalytic filtration system achieved RhB removal of 90.29% with an EE/O of 5.51 kWh m⁻³ after 90 min of electrolysis when the initial pH value was 5, the current density was 10 mA cm⁻², the initial RhB concentration was 50 mg L⁻¹, and the membrane flux was 2.17 m³ (m²·h)⁻¹. The flow-through electrocatalytic filtration system was found to be limited by kinetics under a high membrane flux and dominated by indirect radical oxidation. Finally, possible RhB degradation pathways were proposed based on the detected intermediates and density functional theory calculations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Measuring Instability of Mobility Management in Cellular Networks.
- Author
Zhao, Xiaohui, Ma, Hanyang, Jin, Yuan, and Yao, Jianguo
- Subjects
MOBILITY management (Mobile radio); CELLULAR immunity; MOBILE communication systems; DATA acquisition systems
- Abstract
Communication in cellular networks is based on serving cells that provide the basic network service. In the real world, serving cells overlap which means the number of serving cells covering one position is usually more than one. Recently, the instability of mobility management in cellular networks has been studied to monitor and analyze the handoff process in mobile devices. However, the handoff process is actually produced by base stations instead of mobile devices. Hence, it is of great importance to measure the handoff process of mobility management from the base station side. In this article, we present a series of experiments performed using the data obtained by mobile network operators. The contributions of this study are three-fold. We reproduce a handoff process and handoff loop from both the mobile device level and the base station level, and confirm the existence of a handoff loop by measurements from the base station side. Through large-range measurements, we discover that only a small part of serving cells is involved in the handoff process, and in most cases, the number of candidate serving cells is much smaller than the number of cells that cover some position; namely, when a handoff loop occurs, the number of candidate serving cells is quite small, which is in contrast to our assumption. We confirm that the handoff loop often occurs in indoor conditions or when the mobile device has frequent communication with the base station. Finally, we present several comprehensive facts about the handoff process and handoff loop and provide suggestions that can be used to increase the quality of service of cellular networks. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. Load Following of Multiple Heterogeneous TCL Aggregators by Centralized Control.
- Author
Hu, Jianqiang, Cao, Jinde, Chen, Michael Z. Q., Yu, Jie, Yao, Jianguo, Yang, Shengchun, and Yong, Taiyou
- Subjects
ELECTRIC power systems; EVALUATION; TRAJECTORY optimization; SIMULATION methods & models; ALGORITHMS
- Abstract
Aggregate thermostatically controlled loads (TCLs) are good candidates for providing load following services in power systems. This paper is concerned with the modeling, evaluation, and control problems of a population of heterogeneous TCLs. Specifically, the heterogeneous population is divided into multiple homogeneous clusters and each cluster, i.e., TCL aggregator, is modeled by an approximated three-input single-output state space model. Here, the aggregators serve as a bridge connecting the load utility and the terminal TCLs, which have their own decision makers and are responsible for aggregate estimation and command issuing. And aggregate evaluation is carried out for the aggregator so as to provide the aggregate regulation capacities and ramping rates, which is useful for setting of the reference power trajectory. Based on the established control model, we furthermore propose a hierarchical centralized control algorithm for a bus load utility to regulate all TCLs inside it so as to provide load following service, while not affecting the customers’ comfort levels. Finally, simulation results with respect to a common bus load are provided to demonstrate the effectiveness of the proposed aggregate modeling and the centralized load following strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
22. A consensus control strategy for dynamic power system look-ahead scheduling.
- Author
Li, Yaping, Yong, Taiyou, Cao, Jinde, Ju, Ping, Yao, Jianguo, and Yang, Shengchun
- Subjects
COMPUTER buses; COOPERATIVE control systems; CONTROL rooms; LOAD balancing (Computer networks); COMPUTER simulation
- Abstract
Flexible loads are important resources to help maintain the power balance of smart grid. However, the control center is not able to exactly sense and control them because of their large quantity and wide distribution. In order to get the flexible loads to participate in the dynamic power system look-ahead scheduling, this paper proposes a three-layer ‘centralized coordination, distributed control’ structure in which the load agents are introduced to perform the coordination based on the consensus control. Then, the optimal control strategy of the control center and the consensus control strategy of the load agents are designed. With the communication among different flexible loads, the distributed cooperative control is implemented. At last, the efficiency of the proposed mode and strategies is proven through the simulation carried out under the standard IEEE 9-bus system, and the effect of topology and communication is also analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
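A minimal sketch of a consensus step of the kind the load agents in the abstract above could run: each agent repeatedly averages its local signal with its neighbors' until all agree. The communication graph, step size, and the meaning of the state are assumptions for illustration, not the paper's control strategy.

```python
# Illustrative sketch: discrete-time average consensus among load agents.
import numpy as np

# Assumed undirected communication graph among 4 load agents (adjacency matrix).
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

x = np.array([10.0, 40.0, 25.0, 5.0])   # initial local signals (e.g. MW adjustments)
eps = 0.25                               # step size below 1 / (max node degree)

for _ in range(100):
    # x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)
    x = x + eps * (adj @ x - adj.sum(axis=1) * x)

print(x)   # all agents converge to the average of the initial values (20.0)
```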
23. Optimal economic dispatch model based on risk management for wind‐integrated power system.
- Author
Wu, Junli, Zhang, Buhan, Wang, Ke, Shao, Jian, Yao, Jianguo, Zeng, Dan, and Ge, Tengyu
- Abstract
In recent years, the proportion of wind power capacity has increased dramatically in several countries to reduce carbon emissions. However, wind power generation cannot be dispatched like conventional thermal power generation because of its randomness and intermittency. To deal with the increased uncertainty, a probabilistic model is established to analyse the uncertainty of wind power and load. A conditional value-at-risk index is applied to assess the risk, including the loss of load and 'spilled' wind energy associated with unpredictable imbalances between generation and load. A cost–risk model is proposed that minimises the operation cost and risk in the optimal economic dispatch problem. The study uses multi-objective particle swarm optimisation to solve the model and obtain the Pareto-optimal solutions, which reflect the relationship between risk and cost. The optimal solution can be determined within the Pareto-optimal set by a risk management method based on analysis of the risk marginal cost. The model and methods are tested on the IEEE 30-bus power system. The results demonstrate that the proposed method can control cost and risk effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
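A small sketch of the conditional value-at-risk (CVaR) evaluation referred to above: given Monte Carlo scenarios of the imbalance cost (load loss plus spilled wind), CVaR at level α is the expected cost in the worst (1 − α) tail. The scenario generation and cost numbers below are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch: empirical VaR and CVaR of scenario costs.
import numpy as np

def var_cvar(costs, alpha=0.95):
    """VaR_alpha = alpha-quantile of cost; CVaR_alpha = mean of costs above it."""
    costs = np.sort(np.asarray(costs))
    var = np.quantile(costs, alpha)
    return var, costs[costs >= var].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    wind_error = rng.normal(0.0, 30.0, size=10_000)        # assumed MW forecast errors
    # Cost grows when wind under-delivers (load loss) or over-delivers (spilled wind),
    # with load loss assumed to be penalized more heavily.
    costs = np.where(wind_error < 0, 80.0, 20.0) * np.abs(wind_error)
    var95, cvar95 = var_cvar(costs, alpha=0.95)
    print(f"VaR95 = {var95:.0f}, CVaR95 = {cvar95:.0f}")    # CVaR >= VaR
```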
24. An Efficient Decomposition Method for the Integrated Dispatch of Generation and Load.
- Author
Zhong, Haiwang, Xia, Qing, Kang, Chongqing, Ding, Maosheng, Yao, Jianguo, and Yang, Shengchun
- Subjects
MATHEMATICAL decomposition; CONSTRAINTS (Physics); CONSTRAINT programming; CONSTRAINED optimization; MIXED integer linear programming
- Abstract
In response to the computational challenges produced by the integrated dispatch of generation and load (IDGL), this paper proposes a novel and efficient decomposition method. The IDGL is formulated using the mixed-integer quadratic constrained programming (MIQCP) method. To efficiently solve this complex optimization problem, the nodal equivalent load shifting bidding curve (NELSBC) is proposed to represent the aggregated response characteristics of customers at a node. The IDGL is subsequently decomposed into a two-level optimization problem. At the upper level, grid operators optimize load shifting schedules based on the NELSBC of each node. Transmission losses are explicitly incorporated into the model to coordinate them with generating costs and load shifting costs. At the bottom level, customer load adjustments are optimized at individual nodes given the nodal load shifting requirement imposed by the grid operators. The key advantage of the proposed method is that the load shifting among different nodes can be coordinated via NELSBCs without iterations. The proposed decomposition method significantly improves the efficiency of the IDGL. Parallel computing techniques are utilized to accelerate the computations. Using numerical studies of IEEE 30-bus, 118-bus, and practically sized 300-bus systems, this study demonstrates that accurate and efficient IDGL scheduling results, which consider the nonlinear impact of transmission losses, can be achieved. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
25. Automated on-line liquid chromatography–photodiode array–mass spectrometry method with dilution line for the determination of bisphenol A and 4-octylphenol in serum
- Author
Liu, Min, Hashi, Yuki, Pan, Fengyun, Yao, Jianguo, Song, Guanqun, and Lin, Jin-Ming
- Subjects
CHROMATOGRAPHIC analysis; BLOOD plasma; LIQUID chromatography; MASS spectrometry
- Abstract
A novel on-line liquid chromatography–photodiode array detection–mass spectrometry (LC–DAD–MS) system was established with a restricted-access media (RAM) pre-column and a dilution line combined with a column-switching valve. The serum samples were injected directly onto the pre-column under diluted conditions via the dilution line. After elution of proteins in the serum, the analytes were backflushed onto an ODS analytical column using a six-port column-switching device. The influence of the composition of the mobile phase, for instance the organic modifier, ionic strength, pH, dilution factor and the rotation time of the switching valve, was investigated using bisphenol A (BPA) and 4-octylphenol (4-OP) as analytes. Peak responses and sensitivity were evaluated by MS, and proteins were removed by the RAM column with DAD monitoring at 280 nm. The peak shape was improved by adding a dilution line, especially in the case of large volume injection (LVI), which increased the sensitivity of the analysis. The selective and sensitive quantification of BPA and 4-OP in serum samples could be finished within 25 min. The method was linear in the range 0.1–500 ng/mL, with limits of quantification for BPA and 4-OP of 0.1 and 0.5 ng/mL, respectively. The recoveries were in the range of 80–101% with RSDs of less than 9.0%. This on-line LC–MS method demonstrates potential application to evaluating the exposure and risk of BPA and 4-OP in humans. [Copyright Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
26. Driving effect of BDNF in the spinal dorsal horn on neuropathic pain.
- Author
Zhou, Wu, Xie, Zhiping, Li, Chengcai, Xing, Zelong, Xie, Shenke, Li, Meihua, and Yao, Jianguo
- Subjects
NEURALGIA; BRAIN-derived neurotrophic factor; NEURAL transmission; MOTOR vehicle driving; NEUROPLASTICITY
- Abstract
• BDNF in the spinal dorsal horn is closely connected with neuropathic pain. • In physiological conditions, BDNF is an important regulator of neuronal development, synaptic transmission, and cellular and synaptic plasticity. • In pathological conditions, BDNF in the spinal dorsal horn may change the CNS from an adaptive state to a maladaptive state. • We elucidate the mechanism linking BDNF in the spinal dorsal horn to neuropathic pain. Neuropathic pain (NP) is caused by direct or indirect damage to the nervous system and is a common symptom of many diseases. The mechanisms underlying the onset and persistence of NP are unclear. Therefore, research concerning these mechanisms has become an important focus in the medical field. Brain-derived neurotrophic factor (BDNF) is a member of the neurotrophic factor family of signaling molecules. BDNF is an important regulator of neuronal development, synaptic transmission, and cellular and synaptic plasticity, which are essential for nerve maintenance and repair. However, BDNF is upregulated in the spinal dorsal horn and can promote NP by activating glial cells, reducing inhibitory functions and enhancing excitation after nociceptive stimulation. This review considers the relationship between NP and BDNF signaling in the spinal dorsal horn and discusses potentially related pathological mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
27. A parallel multi-scenario learning method for near-real-time power dispatch optimization.
- Author
Guan, Jinyu, Tang, Hao, Wang, Ke, Yao, Jianguo, and Yang, Shengchun
- Subjects
INTERACTIVE learning; REINFORCEMENT learning; POWER resources; DEEP learning; CLASSROOM environment; RENEWABLE natural resources; WIND power
- Abstract
Power dispatch problems have become more complex as the share of uncertain renewable resources in the power system has gradually increased in recent years. To make use of renewable energy, such as wind energy, more adequately, wisely, and intelligently, higher requirements are placed on the level of inter-region power dispatch coordination. In this context, solving the power dispatch problem at large scale in near-real-time (5 min in this paper) becomes more important. In this paper, power dispatch is treated as a sequential decision-making problem, and Deep Reinforcement Learning (DRL) with continuous control is introduced to offer a smarter solution. We design a novel interactive learning environment based on the economic power dispatch model for the DRL algorithm, and we propose two feasible implementations to handle different application scenarios. As a result, DRL with continuous control performs well in our proposed implementations. Moreover, we find that the richness of dispatching data has a significant influence on the generalization of the learned policy. • A novel learning method for ED within 5 min considering the injection of large-scale wind power is presented. • This paper designs an interactive parallel learning framework based on multiple source-load scenarios. • It improves learning efficiency and brings effective utilization of historical data. • Data richness has a significant influence on the generalization of the learned policy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. Relationship between mechanical properties and processing performance of agglomerated diamond abrasive compared with single diamond abrasive.
- Author
Chen, Jiapeng, Zhu, Yongwei, Wang, Jianbin, Peng, Yanan, Yao, Jianguo, and Ming, Shun
- Subjects
ABRASIVES; SAPPHIRES; FRETTING corrosion; SURFACE topography; DIAMONDS; SURFACE roughness; ROUGH surfaces
- Abstract
Fixed abrasives are taking the place of loose abrasives owing to their high efficiency and low pollution. The self-conditioning ability of fixed abrasive pads (FAPs) plays a favorable role in the lapping process. In this paper, agglomerated diamond (AD) abrasives were prepared for the sapphire lapping process. The material removal rate (MRR), material removal rate variation (MRRV) and surface topography of sapphire wafers lapped by a fixed AD abrasive pad were investigated. Abrasive wear and wear debris were characterized, and the material removal mechanisms of AD abrasives were discussed. The mechanical properties of AD and SD abrasives in the abrasive wear process were compared, and the influence of the mechanical properties of AD abrasives on processing performance was explained. Compared with fixed single diamond (SD) abrasive pads, higher MRR and lower MRRV and surface roughness of the sapphire wafer were obtained in the fixed AD abrasive lapping process. The stable large protrusion height and consecutive sharp edges of AD abrasives ensure excellent lapping performance. The protrusion height of the abrasives is positively correlated with the self-conditioning ability of FAPs. The stable protrusion height of AD abrasives is attributed to two factors: first, the rough surface of an AD abrasive with multiple sharp cutting edges improves the bonding strength between the abrasives and the pad matrix; second, diamond particles detached through the micro-fracture of AD abrasives accelerate the abrasion of the matrix. • Agglomerated diamond (AD) abrasives with multiple cutting edges are prepared from diamond and ceramics. • Mechanical properties of AD abrasives, including the micro-fracture property, are evaluated. • Material removal characteristics of AD abrasives in the case of abrasive wear are analyzed. • The relationship between mechanical properties and processing performance of AD abrasives is confirmed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. Nonintrusive load monitoring in residential households with low-resolution data.
- Author
Shi, Xin, Ming, Hao, Shakkottai, Srinivas, Xie, Le, and Yao, Jianguo
- Subjects
HOUSEHOLDS; ENERGY consumption; COST control; COMPETITION (Psychology); LOAD forecasting (Electric power systems); TIME-domain analysis
- Abstract
• A novel nonintrusive load monitoring algorithm applicable to low-resolution data is proposed. • The algorithm improves the disaggregation accuracy and computational efficiency. • An analysis of how the data resolution affects the prediction performance is performed. • A cross-prediction approach is proposed that does not require a household's own historical data. • Four real-world datasets are used to validate the effectiveness of the methodology. Detailed information on individual appliance consumption is beneficial for improving energy efficiency and managing demand response. Nonintrusive load monitoring (NILM) aims to estimate the device-level energy consumption from the load data of an entire household. Because the majority of households can only provide load data at a normal smart-meter level, this paper introduces a novel similar time window (STW) algorithm to perform NILM with lower-resolution data. Derived from k-nearest neighbors (kNN), the proposed STW algorithm compares both the time and frequency domain similarities between windows of interest and historical data segments, and then selects the most similar time windows by instance-based learning to determine the device-level energy consumption. The desirable features of this algorithm include (1) reductions in the costs of and requirements for sensing equipment, (2) improvements in privacy preservation, and (3) a significant enhancement of the computational efficiency. To facilitate the selection of the data resolution and to satisfy the NILM application requirements in a cost-effective way, the paper also investigates the relationship among the input/output data resolution, time window length and prediction accuracy. To enable the generalizability of this algorithm, a cross-prediction approach is proposed to obtain the device-level consumption from a "library" of a group of households, without knowing each one's own historical data. Simulation results using four real-world public datasets demonstrate the competitive performance of the proposed STW algorithm with respect to traditionally used approaches for low-resolution NILM. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
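A compact sketch of the similar-time-window idea in the abstract above: score historical windows by a mix of time-domain and frequency-domain (FFT magnitude) distance to the window of interest, then average the device-level consumption of the k most similar windows. Window length, weights, synthetic data, and names are illustrative assumptions, not the paper's STW algorithm.

```python
# Illustrative sketch: kNN over time- and frequency-domain window similarity
# to estimate one appliance's consumption from whole-house load windows.
import numpy as np

def window_distance(a, b, w_freq=0.5):
    d_time = np.linalg.norm(a - b)
    d_freq = np.linalg.norm(np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b)))
    return (1 - w_freq) * d_time + w_freq * d_freq

def stw_predict(query, hist_windows, hist_device, k=5):
    """Average the device-level windows of the k most similar historical windows."""
    dists = np.array([window_distance(query, h) for h in hist_windows])
    nearest = np.argsort(dists)[:k]
    return hist_device[nearest].mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    hist_windows = rng.uniform(0.2, 3.0, size=(500, 60))   # 500 whole-house windows (kW)
    hist_device = 0.3 * hist_windows + rng.normal(0, 0.05, size=(500, 60))
    query = hist_windows[0] + rng.normal(0, 0.01, size=60)
    est = stw_predict(query, hist_windows, hist_device)
    print(est.shape, float(est.mean()))
```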