14 results for "Throughput optimization"
Search Results
2. Optimal 4QAM backscatter modulation for passive UHF CRFID tags.
- Author
- Zhao, Jumin, Wang, Ganzhi, Li, Dengao, Xu, Shuang, Guo, Xiuzhen, and Li, Yajun
- Subjects
- QUADRATURE amplitude modulation, REFLECTANCE, ENERGY consumption, INTERNET of things, INDUSTRIAL efficiency, BACKSCATTERING
- Abstract
With the development of the Internet of Things (IoT), the amount of transmitted data is growing rapidly. Traditional passive UHF CRFID tags currently adopt 2ASK or 2PSK backscatter modulation, in which each symbol conveys 1 bit of information. However, this modulation suffers from low throughput, poor real-time performance, and low energy utilization. This paper proposes an optimal 4 Quadrature Amplitude Modulation (4QAM) backscatter scheme at 920 MHz, which can increase the throughput and simultaneously improve energy utilization. In this optimal scheme, the reflection coefficient modulus |Γ| is closely related to the energy. The low power of the passive tag requires that the design of the modulation circuit consider how to choose |Γ|_opt (the optimal reflection coefficient modulus). Therefore, we build an energy model of CRFID to explore |Γ|_opt at different symbol rates and use |Γ|_opt to design a more reasonable constellation diagram for passive tags. Finally, this paper analyzes the decoding of multinary backscatter signals, and an improved K-means algorithm solves the cluster offset problem at high symbol rates. Experiments show that |Γ|_opt is around 0.67, the maximum throughput of 4QAM backscatter modulation can reach 16 Mbit/s, and the power consumption is 4.38 mW. Compared with traditional binary modulation, the tag in this paper doubles the throughput at the same symbol rate. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the K-means decoding step follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
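The abstract above reports |Γ|_opt ≈ 0.67 and an improved K-means decoder for the 4QAM backscatter constellation. As a rough illustration only, the sketch below runs a plain K-means over received I/Q samples, seeded at an assumed symmetric 4QAM constellation scaled by |Γ|_opt; the paper's "improved" cluster-offset handling is not described in the abstract and is not reproduced here, and all names below are illustrative.

```python
import numpy as np

GAMMA_OPT = 0.67  # optimal reflection coefficient modulus reported in the abstract
# Assumed symmetric 4QAM constellation, scaled so that each point has |Γ| = GAMMA_OPT.
NOMINAL_4QAM = GAMMA_OPT / np.sqrt(2) * np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

def kmeans_decode(iq_samples, centroids=NOMINAL_4QAM, iters=20):
    """Cluster complex I/Q samples around 4 centroids and return symbol labels.

    Refining the centroids lets the clusters track slow drift of the received
    constellation; the paper's offset correction is more elaborate than this.
    """
    c = centroids.astype(complex)
    for _ in range(iters):
        dist = np.abs(iq_samples[:, None] - c[None, :])  # sample-to-centroid distances
        labels = dist.argmin(axis=1)
        for k in range(len(c)):
            members = iq_samples[labels == k]
            if members.size:
                c[k] = members.mean()                    # centroid update step
    return labels, c

# usage with synthetic noisy samples:
# rng = np.random.default_rng(0)
# tx = NOMINAL_4QAM[rng.integers(0, 4, 1000)]
# noise = 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
# labels, centroids = kmeans_decode(tx + noise)
```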
3. Energy-aware data throughput optimization for next generation internet.
- Author
- Kosar, Tevfik, Alan, Ismail, and Bulut, M. Fatih
- Subjects
- DATA transmission systems, ELECTRIC power consumption, HTTP (Computer network protocol), INTERNET traffic, ALGORITHMS
- Abstract
According to recent statistics, more than 1 zettabyte of data is moved over the Internet annually, which consumes several terawatt-hours of electricity and costs the world economy billions of US dollars. Hypertext Transfer Protocol (HTTP) is used in the majority of these data transfers, accounting for 70% of global Internet traffic. We claim that HTTP transfers, and the services based on HTTP, can become more energy efficient without any performance degradation through application-level tuning of certain protocol parameters. In this paper, we analyze several application-level parameters that affect the throughput and energy consumption of HTTP data transfers, such as the level of parallelism, concurrency, and pipelining. We introduce novel service-level-agreement (SLA) based algorithms which can decide the best combination of these parameters considering user-defined energy efficiency and performance criteria. Our experimental results show that up to 80% energy savings can be achieved at the client and server hosts during HTTP data transfers, while increasing the end-to-end transfer throughput at the same time. [ABSTRACT FROM AUTHOR] (An illustrative sketch of an SLA-style parameter search follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
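The SLA-based algorithms themselves are not detailed in the abstract above. The sketch below is only a minimal stand-in for the idea of choosing parallelism, concurrency, and pipelining levels against a user-defined energy cap; measure_throughput and measure_energy are hypothetical callbacks that would wrap real HTTP transfer probes.

```python
from itertools import product

def choose_parameters(measure_throughput, measure_energy, max_energy_joules,
                      parallelism=(1, 2, 4, 8),
                      concurrency=(1, 2, 4),
                      pipelining=(0, 1, 2)):
    """Return the (parallelism, concurrency, pipelining) combination with the
    highest measured throughput whose measured energy stays within the SLA cap."""
    best, best_tput = None, -1.0
    for p, cc, pp in product(parallelism, concurrency, pipelining):
        energy = measure_energy(p, cc, pp)
        if energy > max_energy_joules:      # violates the energy SLA
            continue
        tput = measure_throughput(p, cc, pp)
        if tput > best_tput:
            best, best_tput = (p, cc, pp), tput
    return best, best_tput

# usage: best_params, tput = choose_parameters(probe_tput, probe_energy, 50.0)
```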
4. Throughput optimization in cognitive wireless network based on clone selection algorithm.
- Author
- Zheng-Yi, Chai, Xue-yang, Yan, Ya-lun, Li, and Si-Feng, Zhu
- Subjects
- COGNITIVE radio, NP-hard problems, INTERFERENCE (Telecommunication), COMPUTATIONAL complexity, ENCODING
- Abstract
In cognitive wireless networks, throughput scheduling optimization under interference temperature constraints has attracted increasing attention in recent years. Many works have investigated it under different scenarios; however, these solutions have either high computational complexity or relatively poor performance. Throughput scheduling is a constrained optimization problem that is NP-hard (Non-deterministic Polynomial-time hard). In this paper, we propose an immune-clone based suboptimal algorithm to solve the problem. Suitable immune-clone operators are designed, such as encoding, cloning, mutation, and selection. The simulation results show that our proposed algorithm obtains near-optimal performance and operates with much lower computational complexity. It is suitable for slowly varying spectral environments. [ABSTRACT FROM AUTHOR] (An illustrative sketch of a clonal-selection loop follows this record.)
- Published
- 2016
- Full Text
- View/download PDF
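As a rough illustration of the immune-clone operators named in the abstract (encoding, cloning, mutation, selection), here is a generic clonal-selection loop over a binary encoding. The fitness function, population sizes, and mutation schedule are all assumptions; they stand in for the paper's throughput objective and interference-temperature constraints.

```python
import random

def clonal_selection(fitness, n_bits, pop_size=20, clones_per=5,
                     mut_rate=0.05, generations=100):
    """Generic clonal-selection search over binary strings of length n_bits."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        clones = []
        for rank, antibody in enumerate(ranked):
            # better-ranked antibodies receive more clones (simple linear schedule)
            for _ in range(max(1, clones_per - rank)):
                mutant = [bit ^ (random.random() < mut_rate) for bit in antibody]
                clones.append([int(b) for b in mutant])
        # selection: keep the best pop_size individuals among parents and clones
        pop = sorted(ranked + clones, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# usage with a toy fitness: best = clonal_selection(lambda x: sum(x), n_bits=16)
```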
5. Joint topology control and routing for multi-radio multi-channel WMNs under SINR model using bio-inspired techniques.
- Author
- Jia, Jie, Chen, Jian, Yu, Jianglei, and Wang, Xingwei
- Subjects
- ROUTING (Computer network management), MULTICHANNEL communication, WIRELESS mesh networks, SIGNAL-to-noise ratio, NP-complete problems
- Abstract
Multi-channel communication in a wireless mesh network (WMN) equipped with multi-radio routers can significantly enhance the network capacity. Channel allocation, power control, and routing are the three main issues affecting the performance of multi-channel multi-radio WMNs. In this paper, the joint optimization of channel allocation, power control, and routing under the signal-to-interference-and-noise ratio (SINR) model for multi-channel multi-radio WMNs is investigated. The problem is proven to be NP-hard, and to the best of our knowledge, no optimal polynomial-time solutions have been proposed in the previous literature. In order to tackle this problem, we apply bio-inspired optimization techniques for channel allocation and power control, and use linear programming for routing optimization. To reflect the cross-layer interaction among these three issues, the routing optimization is further defined as the fitness value of a chromosome in the bio-inspired optimization. Further, we propose an effective joint optimization framework, in which two representative bio-inspired optimization methods (a genetic algorithm and a particle swarm optimization algorithm) are hybridized to enhance the searching ability. The detailed evolution processes for both the genetic algorithm and the particle swarm optimization algorithm are demonstrated. Extensive simulation results show that the proposed algorithm converges quickly and approaches the sub-optimal solution effectively. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the underlying SINR feasibility test follows this record.)
- Published
- 2015
- Full Text
- View/download PDF
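The abstract above works under the SINR interference model but does not restate it, so the following is only a reminder sketch of the feasibility test such a model implies: a link's transmission succeeds when its SINR clears a decoding threshold given all concurrent same-channel transmissions. The gains, powers, and threshold are illustrative inputs, not values from the paper.

```python
def sinr(link, links, channel, power, gain, noise=1e-10):
    """SINR of `link` given all concurrent transmissions on the same channel.

    power[i]: transmit power of link i; gain[i][j]: channel gain from the
    transmitter of link i to the receiver of link j; channel[i]: channel index.
    """
    interference = sum(power[j] * gain[j][link]
                       for j in links
                       if j != link and channel[j] == channel[link])
    return power[link] * gain[link][link] / (noise + interference)

def all_feasible(links, channel, power, gain, threshold=10.0, noise=1e-10):
    """Under the SINR model, a channel/power assignment is feasible only if
    every active link clears the decoding threshold simultaneously."""
    return all(sinr(l, links, channel, power, gain, noise) >= threshold
               for l in links)
```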
6. A genetic approach on cross-layer optimization for cognitive radio wireless mesh network under SINR model.
- Author
- Jia, Jie, Wang, Xingwei, and Chen, Jian
- Subjects
- WIRELESS mesh networks, CROSS layer optimization, COGNITIVE radio, SIGNAL-to-noise ratio, GENETIC algorithms
- Abstract
Due to limited spectrum resources and differences in link loads, obtaining maximum network throughput through cross-layer design under the signal-to-interference-and-noise ratio (SINR) model is recognized as a fundamental but hard problem. For this reason, the throughput maximization problem jointly with power control, channel allocation, and routing under the SINR model is researched. First, by formulating the optimization model and exploiting its special structure, we show that the throughput maximization problem can be decomposed into two sub-problems: a channel allocation and power control sub-problem at the link-physical layer, and a throughput optimization sub-problem at the network layer. For the link-physical layer sub-problem, since the joint optimization of channel allocation and power control is NP-hard, we apply a genetic algorithm to search for the optimal solution. For the network layer sub-problem, we use a linear programming technique for throughput optimization. To reflect the interplay among these three layers, the fitness of each individual in the genetic algorithm is evaluated by solving the network layer sub-problem. Therefore, an effective cross-layer optimization framework based on the genetic algorithm is obtained, which can find optimized power control, channel allocation, and route selection in polynomial time. In order to enhance convergence during evolution, an integer-based representation scheme and corresponding genetic operators are carefully designed with appropriate constraint control mechanisms. Extensive simulation results demonstrate that the proposed scheme obtains higher network throughput than previous works with comparable computational complexity. [ABSTRACT FROM AUTHOR] (An illustrative sketch of this GA-plus-routing decomposition follows this record.)
- Published
- 2015
- Full Text
- View/download PDF
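The decomposition described above (a genetic algorithm over integer channel assignments whose fitness is the network-layer throughput) can be summarized as in the sketch below. The routing_throughput callback is a hypothetical placeholder for the linear-programming sub-problem, and the crossover/mutation operators are generic rather than the paper's constraint-aware ones.

```python
import random

def ga_cross_layer(n_links, n_channels, routing_throughput,
                   pop_size=30, generations=50, mut_rate=0.1):
    """Search integer channel assignments; fitness = network-layer throughput."""
    pop = [[random.randrange(n_channels) for _ in range(n_links)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=routing_throughput, reverse=True)    # fitness via routing LP
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_links)
            child = a[:cut] + b[cut:]                      # one-point crossover
            child = [random.randrange(n_channels) if random.random() < mut_rate
                     else gene for gene in child]          # integer mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=routing_throughput)

# usage: best = ga_cross_layer(n_links=10, n_channels=3, routing_throughput=solve_lp)
```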
7. TDOCP: A two-dimensional optimization integrating channel assignment and power control for large-scale WLANs with dense users.
- Author
- Jiang, Hao, Zhou, Chen, Wu, Lihua, Wang, Hao, Lu, Zheng, Ma, Li, and Li, Yuan
- Subjects
- TWO-dimensional models, WIRELESS LANs, HEURISTIC algorithms, COMPUTER users, MATHEMATICAL optimization, COMPUTER networks
- Abstract
In order to improve the throughput of high-density and large-scale wireless local area networks (WLANs), a novel heuristic algorithm, Two-Dimensional Optimization Integrating Channel Assignment and Power Control (TDOCP), is proposed. Based on traffic characteristics analyzed from a real network, we take both uplink and downlink traffic into account in the network model. The analysis of network utility shows that it has a controllable upper bound. This paper therefore develops a scheme that maximizes the upper bound of network utility and then makes the network utility converge to the enhanced upper bound. Both channel assignment (CA) and power control (PC) are performed in each iteration. To illustrate the advantages of TDOCP, a comparison algorithm, One-Dimensional Optimization Integrating Channel Assignment and Power Control (ODOCP), is designed, which implements the least congested channel search (LCCS) for CA and power control for AP performance (PCAP) for PC; only one-dimensional optimization is performed in each iteration of ODOCP. Extensive simulations show that the network model in this paper is effective, and that TDOCP outperforms the current popular one-dimensional optimization algorithms LCCS and PCAP, as well as ODOCP, in terms of increasing throughput and reducing network delay. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the two-dimensional iteration pattern follows this record.)
- Published
- 2015
- Full Text
- View/download PDF
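The abstract does not give TDOCP's internals, so the sketch below only illustrates the two-dimensional iteration pattern it describes: each round applies a channel-assignment update and a power-control update, keeping the result only while the network utility keeps rising toward its upper bound. ca_step, pc_step, and utility are hypothetical callbacks over the WLAN state.

```python
def two_dimensional_optimize(state, ca_step, pc_step, utility,
                             max_iters=100, eps=1e-6):
    """Alternate channel-assignment and power-control updates each iteration."""
    best = utility(state)
    for _ in range(max_iters):
        candidate = pc_step(ca_step(state))   # both dimensions in one iteration
        u = utility(candidate)
        if u <= best + eps:                   # utility stopped improving: converged
            break
        state, best = candidate, u
    return state, best

# usage: final_state, final_utility = two_dimensional_optimize(init, ca, pc, net_utility)
```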
8. Obstacles constrained group mobility models in event-driven wireless networks with movable base stations.
- Author
- Cristaldi, S., Ferro, A., Giugno, R., Pigola, G., and Pulvirenti, A.
- Subjects
- CONSTRAINT satisfaction, WIRELESS communications, AD hoc computer networks, SIGNAL processing, ALGORITHMS, GRAPH theory
- Abstract
In this paper, we propose a protocol for the dynamic reconfiguration of ad-hoc wireless networks with movable base stations in the presence of obstacles. Hosts are assigned to base stations according to a probabilistic throughput function based on both the quality of the signal and the base station load. In order to optimize space coverage, base stations cluster hosts using a distributed clustering algorithm. Obstacles may interfere with transmission and obstruct the movement of base stations and hosts. To overcome this problem, we reposition base stations using a motion-planning algorithm on the visibility graph, based on an extension of the bottleneck matching technique. We implemented the protocol on top of the NS2 simulator as an extension of AODV. We tested it using both the Random Way Point and Reference Point Group mobility models, properly adapted to deal with obstacles. Experimental analysis shows that the protocol ensures total space coverage together with good throughput on the realistic model (Reference Point Group), outperforming both standard AODV and DSR. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the probabilistic host assignment follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
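The protocol above assigns hosts to base stations "according to a probabilistic throughput function based on both the quality of the signal and the base station load". The exact function is not given in the abstract, so the score used below (signal divided by one plus load) and the roulette-wheel draw are purely illustrative.

```python
import random

def assign_host(signal_quality, load):
    """Pick a base station for one host; inputs are dicts keyed by station id."""
    scores = {bs: signal_quality[bs] / (1.0 + load[bs]) for bs in signal_quality}
    r = random.uniform(0, sum(scores.values()))
    acc = 0.0
    for bs, score in scores.items():          # roulette-wheel selection
        acc += score
        if r <= acc:
            return bs
    return max(scores, key=scores.get)        # floating-point fallback

# usage: assign_host({"bs1": 0.9, "bs2": 0.6}, {"bs1": 12, "bs2": 3})
```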
9. Cost-based vectorization of instance-based integration processes
- Author
- Boehm, Matthias, Habich, Dirk, Preissler, Steffen, Lehner, Wolfgang, and Wloka, Uwe
- Subjects
- COST analysis, VECTOR analysis, WORKFLOW, INFORMATION processing, PERFORMANCE evaluation, MATHEMATICAL optimization, COMPUTATIONAL complexity
- Abstract
Integration processes are workflow-based integration tasks. The inefficiency of these processes is often caused by low resource utilization and significant waiting times for external systems. To overcome these problems, we proposed the concept of process vectorization, in which instance-based integration processes are transparently executed with the pipes-and-filters execution model. The term vectorization is used in the sense of processing a sequence (vector) of messages by one standing process. Although it has been shown that process vectorization achieves a significant throughput improvement, this concept has two major drawbacks. First, the theoretical performance of a vectorized integration process mainly depends on the performance of the most cost-intensive operator. Second, the practical performance strongly depends on the number of used threads and thus on the number of operators. In this paper, we present an advanced optimization approach that addresses these problems. We generalize the vectorization problem and explain how to vectorize process plans in a cost-based manner, taking into account the costs of the individual operators in the form of their execution times. Due to the exponential time complexity of the exhaustive computation approach, we also provide a heuristic algorithm with linear time complexity. Furthermore, we explain how to apply the general cost-based vectorization to multiple process plans, and we discuss periodical re-optimization. Our evaluation shows that the message throughput can be significantly increased compared to both instance-based execution and rule-based vectorized execution. [ABSTRACT FROM AUTHOR] (An illustrative sketch of a cost-based operator grouping heuristic follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
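The abstract mentions a linear-time heuristic for cost-based vectorization but does not spell it out. The sketch below is only one plausible greedy variant: fold operators (in plan order) into pipeline buckets whose cost stays near a per-bucket budget, since the standing process's throughput is bounded by its most expensive bucket. The function name and budget rule are assumptions.

```python
def group_operators(costs, max_buckets):
    """Greedily group operators (given their execution times, in plan order)
    into at most max_buckets pipeline stages with roughly balanced cost."""
    target = max(sum(costs) / max_buckets, max(costs))   # per-bucket cost budget
    buckets, current, current_cost = [], [], 0.0
    for i, c in enumerate(costs):
        # close the current bucket if adding this operator would exceed the budget
        if current and current_cost + c > target and len(buckets) < max_buckets - 1:
            buckets.append(current)
            current, current_cost = [], 0.0
        current.append(i)
        current_cost += c
    buckets.append(current)
    return buckets

# usage: group_operators([5.0, 1.0, 1.0, 7.0, 2.0], max_buckets=3)
# -> [[0, 1, 2], [3], [4]]  (operator indices per pipeline stage)
```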
10. Calculation of traffic flow breakdown probability to optimize link throughput
- Author
- Wang, Haizhong, Rudy, Kimberly, Li, Jia, and Ni, Daiheng
- Subjects
- TRAFFIC flow, PROBABILITY theory, MATHEMATICAL optimization, MARKOV processes, EMPIRICAL research, QUANTITATIVE research, PERFORMANCE evaluation
- Abstract
The traffic breakdown phenomenon is prevalent in empirical observations of traffic systems. Traffic flow breakdown is usually defined as a sudden drop in traffic flow speed when traffic demand exceeds capacity. Modeling and calculating the traffic flow breakdown probability remains an important issue when analyzing the stability and reliability of a transportation system, and the breakdown mechanism is still not well understood by practitioners and researchers. Treating breakdown as a random event, this paper uses a discrete-time Markov chain (DTMC) to model the traffic state transition path, so that a transition probability matrix can be generated from empirical observations. From an empirical analysis of breakdown, we found that the breakdown probability follows a Zipf distribution. Therefore, a connection is established between the traffic flow breakdown probability and the number of vehicles occupying a certain freeway segment (e.g., a link). Following from these results, a quantitative measure of breakdown probability can be obtained to optimize ramp metering rates and achieve optimum system performance. [Copyright Elsevier] (An illustrative sketch of the DTMC transition-matrix estimation follows this record.)
- Published
- 2010
- Full Text
- View/download PDF
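The DTMC step described above, generating a transition probability matrix from empirical observations, can be estimated as in the sketch below, assuming the traffic states have already been discretized (e.g., into speed or occupancy bins) by the caller; the binning scheme itself is not given in the abstract.

```python
import numpy as np

def transition_matrix(state_sequence, n_states):
    """Estimate a DTMC transition probability matrix from an observed sequence
    of discretized traffic states (integers in [0, n_states))."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(state_sequence[:-1], state_sequence[1:]):
        counts[s, s_next] += 1                 # count observed transitions
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # avoid division by zero for unseen states
    return counts / row_sums                   # normalize rows to probabilities

# usage: P = transition_matrix([0, 0, 1, 2, 1, 0, 2, 2], n_states=3)
```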
11. Model-based control for throughput optimization of automated flats sorting machines
- Author
- Tarău, A.N., De Schutter, B., and Hellendoorn, J.
- Subjects
- AUTOMATIC control systems, SORTING devices, MAIL sorting, MATHEMATICAL optimization, PREDICTIVE control systems
- Abstract
Mail items of A4 size are called flats. In order to handle the large volumes of flats that have to be processed, state-of-the-art postal sorting centers are equipped with dedicated flats sorting machines. The throughput of a flats sorting machine is crucial when dealing with a continually increasing number of items to be sorted in a given time, but the throughput is limited by mechanical constraints. In order to optimize the efficiency of this sorting system, in this paper several design changes are proposed and advanced model-based control methods, such as optimal control and model predictive control, are implemented. An event-based model of the flats sorting system is also determined using simulation. The considered control methods are compared for several scenarios. The results indicate that by using the proposed approaches the throughput can be increased by over 20%. [Copyright Elsevier] (An illustrative sketch of a receding-horizon control loop follows this record.)
- Published
- 2009
- Full Text
- View/download PDF
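The paper's event-based model and controllers are not reproduced in the abstract, so the sketch below only shows the generic receding-horizon pattern that model predictive control follows: simulate short candidate control sequences, apply the first action of the best one, and repeat. simulate, step_model, and the action set are hypothetical placeholders.

```python
from itertools import product

def mpc_step(state, actions, simulate, horizon=3):
    """Pick the next action by enumerating control sequences over a short horizon.

    simulate(state, seq) -> (predicted_throughput, final_state) for sequence seq.
    """
    best_seq, best_tput = None, float("-inf")
    for seq in product(actions, repeat=horizon):
        tput, _ = simulate(state, seq)
        if tput > best_tput:
            best_seq, best_tput = seq, tput
    return best_seq[0]                         # receding horizon: apply first action only

def run_controller(state, actions, simulate, step_model, n_steps):
    """Closed loop: step_model(state, action) advances the real (or simulated) plant."""
    for _ in range(n_steps):
        state = step_model(state, mpc_step(state, actions, simulate))
    return state
```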
12. X-ray diffraction imaging—A multi-generational perspective
- Author
- Harding, G.
- Subjects
- EXPLOSIVES detection, X-ray diffraction, NARCOTICS, TOMOGRAPHY, FEASIBILITY studies, BAYESIAN analysis
- Abstract
A brief description is given of some applications of X-ray diffraction imaging (XDI) in security screening, including the detection of narcotics and a wide range of explosives: organic (plastic) explosives, liquids, home-made explosives (HMEs), and special nuclear materials (SNMs). A Bayesian formulation of the “rare event scenario” is presented, allowing the probability to be quantified that an unlikely threat is indeed present when an uncertain detection system raises an alarm. Granted the utility of X-ray diffraction (XRD) as a significant screening modality for false-alarm resolution, the topic of its technological feasibility is addressed. It is shown that, in analogy to computed tomography, XDI permits a significant reduction in measurement time per object volume element (voxel) compared with that of a classical X-ray diffractometer. This reduction can be accomplished by designing the XDI system to record energy-dispersive XRD profiles from many volume elements (object voxels) in parallel. A general scheme for designing “massively-parallel” (MP) XDI systems is presented. XDI configurations of the first generation (1 voxel/s), second generation (100 voxels/s), and third generation (10^4 voxels/s) are presented and discussed. Three alternative third-generation XDI geometries, namely direct fan-beam, parallel (waterfall) beam, and inverse fan-beam, are compared with respect to technological realization. Directions for the future development of XDI in screening applications are outlined. [Copyright Elsevier] (An illustrative sketch of the Bayesian rare-event calculation follows this record.)
- Published
- 2009
- Full Text
- View/download PDF
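The "rare event scenario" mentioned above is a direct application of Bayes' rule: with a very low prior probability of a threat, even a good detector yields a modest posterior probability when it alarms. The numbers in the usage comment are illustrative only and are not taken from the paper.

```python
def p_threat_given_alarm(prior, sensitivity, false_alarm_rate):
    """Bayes' rule: P(threat | alarm) for a detector with the given
    detection probability (sensitivity) and false-alarm probability."""
    p_alarm = sensitivity * prior + false_alarm_rate * (1.0 - prior)
    return sensitivity * prior / p_alarm

# e.g. a 1-in-10,000 prior, 95% detection rate, 2% false-alarm rate:
# p_threat_given_alarm(1e-4, 0.95, 0.02) ~= 0.0047, which motivates a second,
# high-specificity modality such as XRD for false-alarm resolution.
```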
13. Optimization of demolding temperature for throughput improvement of nanoimprint lithography
- Author
- Leveder, T., Landis, S., Davoust, L., and Chaix, N.
- Subjects
- PLASTICS, LITHOGRAPHY, PRINTS, PHYSICS
- Abstract
The effects of annealing on the reflow of imprinted resist patterns have been investigated on 250 nm dense line arrays printed with standard hot-embossing lithography and a thermoplastic polymer. Atomic force microscopy measurements were performed to determine the effects of annealing temperature and annealing time, respectively. The reflow velocity with respect to annealing temperature has been determined; its variation is ascribed to both the resist's dynamic viscosity and its surface free energy. Our approach demonstrates that the imprint cycle time can be significantly reduced by saving cooling-down time. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
14. Scalable and fair resource sharing among 5G D2D users and legacy 4G users: A game theoretic approach.
- Author
- Mukherjee, Sreetama and Ghosh, Sasthi C.
- Subjects
- 5G networks, 4G networks, RESOURCE allocation, SHARING, FAIRNESS, GAMES
- Abstract
We propose a game theoretic approach to the problem of resource allocation in fifth generation (5G) device-to-device communications underlaying cellular networks, where modern 5G device-to-device users share channel resources of legacy fourth generation (4G) cellular users. Though such sharing improves scalability as more users can be served, the individual throughput obtained by a cellular user may be reduced when its channel resource is shared by a device-to-device user pair. Thus our aim is to increase the overall system throughput while ensuring effective user fairness for the legacy cellular users served. Our approach provides a tradeoff between the two objectives, ensuring that no legacy cellular user is heavily compromised while still encouraging modern device-to-device users. We address this fairness problem by thresholding the reduction in a cellular user's throughput with respect to its ideal throughput when its channel is not reused. We propose a game theoretic model to make strategic decisions for selecting the preferred action configuration that meets this objective. We also discuss the convergence and time bound of the proposed game model, with a clear analysis of the scenarios encountered during the shared consumption of resources among cellular and device-to-device users. Together with this game model, the convergence and time bound analyses, and two validation processes, we aim to improve scalability and overall system throughput without compromising the fairness of the cellular users. We also compare the user fairness achieved by our approach with two existing approaches to demonstrate the effectiveness of our game theoretic approach. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the fairness-threshold check follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
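The fairness mechanism described above thresholds a cellular user's throughput reduction relative to its ideal (non-reused) throughput. The sketch below shows only that check plus a naive greedy admission loop; the paper's actual game-theoretic action selection is not reproduced, and the throughput estimates are assumed inputs.

```python
def sharing_allowed(ideal_tput, shared_tput, max_relative_loss=0.2):
    """True if the cellular user's relative throughput loss from channel
    sharing stays within the fairness threshold."""
    loss = (ideal_tput - shared_tput) / ideal_tput
    return loss <= max_relative_loss

def admit_d2d_pairs(cellular_ideal, shared_estimates, max_relative_loss=0.2):
    """Greedy admission of D2D pairs onto cellular channels.

    cellular_ideal: ideal throughput per cellular user (no reuse).
    shared_estimates: maps (d2d_pair, cellular_user) to the cellular user's
    estimated throughput if that pair reuses its channel.
    """
    admitted = {}
    for (pair, user), shared_tput in shared_estimates.items():
        if user not in admitted.values() and \
           sharing_allowed(cellular_ideal[user], shared_tput, max_relative_loss):
            admitted[pair] = user          # one D2D pair per cellular channel
    return admitted
```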