1,660 results
Search Results
2. Acquiring supply chain agility through information technology capability: the role of demand forecasting in retail industry
- Author
-
Bai, Bingfeng
- Published
- 2023
- Full Text
- View/download PDF
3. A new algorithm for detecting cloud height using OMPS/LP measurements.
- Author
-
Chen, Z., DeLand, M., and Bhartia, P. K.
- Subjects
ALGORITHM research, CLOUDS, HEIGHT measurement - Abstract
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
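The two-wavelength gradient test that the abstract above describes can be sketched roughly as follows. This is a hypothetical illustration, not the published OMPS/LP algorithm: the function name, the threshold values and the spectral-flatness criterion for separating clouds from aerosols are all assumptions.

```python
def cloud_top_height(alt, rad_vis, rad_nir, grad_thresh=0.15, ratio_max=1.3):
    """Scan a limb radiance profile from the top down and return the first
    altitude where the vertical radiance gradient is strong at BOTH
    wavelengths and nearly wavelength-independent (cloud-like), rather
    than strongly spectrally dependent (aerosol-like).

    alt, rad_vis, rad_nir: equal-length lists, altitude descending.
    Thresholds are illustrative, not the published values.
    """
    for i in range(len(alt) - 1):
        dz = alt[i] - alt[i + 1]
        g_vis = (rad_vis[i + 1] - rad_vis[i]) / dz  # radiance grows downward
        g_nir = (rad_nir[i + 1] - rad_nir[i]) / dz
        if g_vis > grad_thresh and g_nir > grad_thresh:
            ratio = g_vis / g_nir
            if 1.0 / ratio_max <= ratio <= ratio_max:  # spectrally flat: cloud
                return alt[i + 1]
    return None  # no cloud detected

# Synthetic profile: clear air above 12 km, a cloud layer below.
alts = [30, 25, 20, 15, 12, 10]
vis = [1.0, 1.05, 1.1, 1.2, 2.5, 4.0]
nir = [0.9, 0.95, 1.0, 1.1, 2.4, 3.9]
top = cloud_top_height(alts, vis, nir)
```

Here the gradient between 15 and 12 km is large and nearly identical at both wavelengths, so 12 km is reported as the cloud top; an aerosol layer would give a strongly wavelength-dependent gradient and be skipped.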
4. A hybrid data-mining framework for train rescheduling strategy pattern discovery.
- Author
-
Chen, Ruirui, Ge, Xuekai, Huang, Ping, and Wen, Chao
- Subjects
DATA mining, AUTOMATIC extracting (Information science), DATABASE searching, ALGORITHM research, DATA science - Abstract
This study presents a hybrid data-mining framework based on feature selection algorithms and clustering methods to perform the pattern discovery of high-speed railway train rescheduling strategies (RSs). The proposed model is composed of two stages. In the first stage, decision tree, random forest, gradient boosting decision tree (GBDT) and extreme gradient boosting (XGBoost) models are used to investigate the importance of features. The features that have a high influence on RSs are first selected. In the second stage, a K-means clustering method is used to uncover the interdependences between RSs and the influencing features, based on the results of the first stage. The proposed method can determine the quantitative relationships between RSs and influencing factors. The results clearly show the influences of the factors on RSs, the possibilities of different train operation RSs under different situations, as well as some key time periods and key trains that the controllers should pay more attention to. The research in this paper can help train traffic controllers better understand train operation patterns and provides direction for optimizing rail traffic RSs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
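The two-stage pipeline in the abstract above (feature importance first, K-means on the retained features second) can be sketched in miniature. The importance measure below is a simple covariance stand-in for the paper's GBDT/XGBoost importances, and all data are made up:

```python
import random

def feature_importance(X, y):
    """Stand-in for the paper's GBDT/XGBoost importances: absolute
    covariance of each feature column with the rescheduling label."""
    n = len(X)
    ybar = sum(y) / n
    imps = []
    for j in range(len(X[0])):
        xbar = sum(row[j] for row in X) / n
        cov = sum((row[j] - xbar) * (yi - ybar) for row, yi in zip(X, y)) / n
        imps.append(abs(cov))
    return imps

def kmeans(points, k, iters=25, seed=1):
    """Tiny Lloyd-style K-means used for the second-stage clustering."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Stage 1: rank features; the third column is pure noise and is dropped.
X = [(0.1, 0.2, 5.0), (0.2, 0.1, 4.9), (0.9, 1.0, 5.1), (1.0, 0.8, 5.0)]
y = [0, 0, 1, 1]
imp = feature_importance(X, y)
keep = sorted(range(3), key=lambda j: -imp[j])[:2]
# Stage 2: cluster rescheduling records in the reduced feature space.
reduced = [tuple(row[j] for j in keep) for row in X]
labels = kmeans(reduced, 2)
```

With the noise column dropped, the two groups of records separate cleanly in the reduced space, which is the effect the paper's stage-two clustering relies on.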
5. Remote sensing of chlorophyll in the Baltic Sea at basin scale from 1997 to 2012 using merged multisensor data.
- Author
-
Pitarch, J., Volpe, G., Colella, S., Krasemann, H., and Santoleri, R.
- Subjects
CHLOROPHYLL, REMOTE sensing, ALGORITHM research - Abstract
Fifteen-year (1997-2012) time series of chlorophyll a (CHL) in the Baltic Sea, based on merged multisensor satellite data provided by the European projects Globcolour and ESA-OC-CCI, were analysed. Several available CHL algorithms were sea-truthed against a large in situ CHL dataset consisting of data by Seadatanet, HELCOM and NOAA. Matchups were calculated for three separate areas (1) Skagerrak and Kattegat, (2) Baltic Proper plus gulfs of Riga and Finland, called here "Central Baltic", (3) Gulf of Bothnia, and for the three areas as a whole. Statistics showed low linearity. The OC4v6 algorithm (R² = 0.46, BIAS = +60%, RMS = 79% for the whole dataset) was linearly transformed by using the best linear fit (OC4corr). By construction, the bias was corrected, but RMS was increased instead. Despite this shortcoming, we demonstrated that errors between OC4corr and in situ data were log-normally distributed and centred at zero. Consequently, unbiased estimators of the horizontally-averaged CHL could be obtained, the error of which tends to zero when a large number of pixels is averaged. From the basin-wide time series, the climatology and the annual anomalies were separated. The climatologies revealed completely different CHL dynamics among regions: in Skagerrak and Kattegat, CHL strongly peaks in late winter, with a minimum in summer and a secondary peak in spring. In the Central Baltic, CHL follows a pattern of a spring CHL peak, followed by a much stronger summer bloom, with decreasing CHL towards winter. The Gulf of Bothnia shows similar CHL dynamics to the central Baltic, although the summer bloom is absent. Across years, CHL showed great variability. Supported by auxiliary satellite sea-surface temperature (SST) data, we found that phytoplankton growth was inhibited in the central Baltic Sea in the years of colder summers or when the SST happened to increase later in the season. Extremely high CHL in spring 2008 was detected and linked to an exceptionally warm preceding winter. Sharp SST changes were found to induce CHL changes in the same direction. This phenomenon was appreciated best by overlaying the time series of the CHL and SST anomalies. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
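The OC4corr step described above (a best linear fit applied to the algorithm output, computed in log space because the CHL errors are log-normally distributed) can be illustrated with a toy matchup set; the 60% multiplicative bias below is chosen to echo the reported BIAS figure, not taken from the paper's data:

```python
import math

def fit_log_linear(sat, insitu):
    """Best linear fit log10(insitu) ~ a*log10(sat) + b: the OC4corr
    idea of linearly transforming the algorithm output, done in log
    space where the errors are close to normal."""
    xs = [math.log10(v) for v in sat]
    ys = [math.log10(v) for v in insitu]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = ybar - a * xbar
    return a, b

def apply_correction(sat, a, b):
    return [10 ** (a * math.log10(v) + b) for v in sat]

# Toy matchups: the raw algorithm is biased high by a constant 60%.
insitu = [0.5, 1.0, 2.0, 4.0, 8.0]
raw = [1.6 * v for v in insitu]
a, b = fit_log_linear(raw, insitu)
corrected = apply_correction(raw, a, b)
```

For this purely multiplicative bias, the fit recovers slope 1 and an intercept of -log10(1.6), so the correction removes the bias exactly; with real scatter it centres the log errors at zero instead.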
6. Performance evaluation model and algorithm of green supply chain management based on sustainable computing.
- Author
-
He, Chao
- Subjects
SUPPLY chain management, SUPPLY chains, FOOD chains, ALGORITHMS, PERFORMANCE management - Abstract
How to facilitate collaborative development between the enterprise and the environment under the dual constraints of resources and the environment is the focus of today's green supply chain management research. Through the performance evaluation of the green supply chain, we can understand the operation of the whole supply chain and its shortcomings and provide a basis for improving related processes, which has important practical significance for improving the competitiveness of the enterprise and the environmental performance of its products. First, by summarising and analysing the research status of sustainable supply chain management in different countries, the research idea and overall background of this paper are proposed. The paper then discusses the theory of sustainable supply chain management and the performance evaluation system and calculation types of sustainable supply chain management. Finally, the relative weight of each index is determined based on the sustainability calculation method, and the decentralisation degree of the index is constructed. The fuzzy comprehensive evaluation method is used to evaluate the performance of the sustainable supply chain, conduct case analysis and summary, and evaluate the performance of green supply chain components in economic, social, environmental and other aspects. In this paper, representative companies are selected as examples to evaluate their green supply chain management performance, and the evaluation algorithm is studied based on the sustainable computing method. The results show that a reasonable and effective evaluation of the enterprise's green supply chain management performance and a sustainable algorithm study can effectively identify potential problems in the operation of the company and improve its overall operation at this stage. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Sequential Discrete Kalman Filter for Real-Time State Estimation in Power Distribution Systems: Theory and Implementation.
- Author
-
Kettner, Andreas Martin and Paolone, Mario
- Subjects
BROADBAND communication systems, KALMAN filtering, PITMAN'S measure of closeness, ALGORITHM research, NOISE measurement - Abstract
This paper demonstrates the feasibility of implementing real-time state estimators for active distribution networks in field-programmable gate arrays (FPGAs) by presenting an operational prototype. The prototype is based on a linear state estimator that uses synchrophasor measurements from phasor measurement units. The underlying algorithm is the sequential discrete Kalman filter (SDKF), an equivalent formulation of the DKF for the case of uncorrelated measurement noise. In this regard, this paper formally proves the equivalence of SDKF and the DKF, and highlights the suitability of the SDKF for an FPGA implementation by means of a computational complexity analysis. The developed prototype is validated using a case study adapted from the IEEE 34-node distribution test feeder. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
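The key property the abstract relies on, that with uncorrelated (diagonal-covariance) measurement noise the DKF update can be applied one scalar measurement at a time with no matrix inversion, can be sketched as follows. This is a generic textbook-style sequential update, not the authors' FPGA implementation:

```python
def sdkf_update(x, P, H, z, r):
    """Sequential measurement update of a discrete Kalman filter.
    Because the measurement noise covariance is diagonal (uncorrelated
    noise), each scalar measurement z[i] with row h = H[i] and variance
    r[i] is absorbed one at a time, replacing the batch DKF's matrix
    inversion with scalar divisions (the property the paper exploits
    for an FPGA implementation).

    x: state (list of n), P: covariance (n x n nested lists).
    """
    n = len(x)
    x = list(x)
    P = [row[:] for row in P]
    for h, zi, ri in zip(H, z, r):
        Ph = [sum(P[a][b] * h[b] for b in range(n)) for a in range(n)]
        s = sum(h[a] * Ph[a] for a in range(n)) + ri      # scalar innovation variance
        K = [Ph[a] / s for a in range(n)]                 # gain column, scalar division only
        innov = zi - sum(h[a] * x[a] for a in range(n))
        x = [x[a] + K[a] * innov for a in range(n)]
        P = [[P[a][b] - K[a] * Ph[b] for b in range(n)] for a in range(n)]
    return x, P

# Two direct, noisy observations of a 2-state vector with unit priors.
x0, P0 = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
H = [[1.0, 0.0], [0.0, 1.0]]
z = [2.0, -1.0]
r = [1.0, 1.0]
x1, P1 = sdkf_update(x0, P0, H, z, r)
```

With equal prior and measurement variances, each state estimate moves halfway toward its measurement and the posterior variances halve, which matches the batch Kalman update for this case.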
8. Research on multi-sensor information fusion and intelligent optimization algorithm and related topics of mobile robots.
- Author
-
Guo, Yuan, Fang, Xiaoyan, Dong, Zhenbiao, and Mi, Honglin
- Subjects
MULTISENSOR data fusion, FUZZY neural networks, MATHEMATICAL optimization, ARTIFICIAL intelligence, INTELLIGENT transportation systems, ALGORITHMS, MOBILE robots - Abstract
Research on mobile robots began in the late 1960s. Mobile robots are a typical autonomous intelligent system and a hot spot in the high-tech field. They sit at the intersection of multiple technical disciplines such as computer artificial intelligence, robotics, control theory and electronic technology. The product not only has potentially very attractive application and commercial value, but its research is also a challenge to intelligent technology. The development of mobile robots provides an excellent testbed for various intelligent technologies and solutions. This dissertation studies multi-sensor information fusion and intelligent optimization methods and their application to mobile robot technologies, with an in-depth study of mobile robot map construction from the perspective of multi-sensor information fusion. To achieve this function, it combines autonomous exploration and other related theories and algorithms with the Robot Operating System (ROS). This paper proposes the area equalization method, equalization method, fuzzy neural network and other methods to promote the realization of related technologies. At the same time, this paper conducts simulation research based on the SLAM comprehensive experiment of the JNPF-4WD square mobile robot. On this basis, high precision and high reliability of robot positioning are further realized. The experimental results in this paper show that the maximum X-axis and Y-axis errors of the FastSLAM algorithm are smaller than those of the EKF algorithm, and the error of the improved FastSLAM algorithm is further reduced compared with the original FastSLAM algorithm, to a value of less than 0.1. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Face and Eye Location of Multi-pose under Illumination.
- Author
-
Shuze Geng and Jing Luo
- Subjects
FACE perception, ALGORITHM research, INTEGRALS - Abstract
The active shape model (ASM) is one of the most widely used algorithms for object area location. However, its performance in detecting faces under non-uniform illumination and in multi-pose face images is not satisfactory. In order to improve the accuracy of locating the face and eyes with the ASM algorithm under non-uniform illumination, this paper puts forward a method combining Gabor features in the direction of the gradient mean with a local ASM model, which improves robustness under non-uniform illumination. The experiments show that, compared with the standard ASM algorithm, the improved method raises the location accuracy for faces and eyes by 11.79% and 18.35%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2015
10. A Group-Discrimination-Based Access Point Selection for WLAN Fingerprinting Localization.
- Author
-
Lin, Tsung-Nan, Fang, Shih-Hau, Tseng, Wei-Han, Lee, Chung-Wei, and Hsieh, Jeng-Wei
- Subjects
HUMAN fingerprints, WIRELESS LANs, LOCATION-based services, SUPPORT vector machines, ALGORITHM research - Abstract
Access point (AP) selection approaches have been used in location fingerprinting systems to improve positioning accuracy and to reduce computational overhead. Although the interference between APs is unavoidable due to overlapped channels, traditional methods treat APs individually by assuming independence among them. This paper proposes a novel group discriminant (GD)-based AP selection approach for improving location fingerprinting, in which the dependence between APs is considered. The proposed GD approach focuses on measuring the positioning capabilities of each group of APs instead of ranking APs based on their individual importance. It utilizes the risk function from support vector machines (SVMs) to estimate the GD value by maximizing the margin between reference locations. Moreover, this paper proposes a faster version, namely, recursive feature elimination (RFE-GD), to find a suboptimal solution of GD efficiently. This paper applies the proposed algorithms to realistic wireless local area networks (WLANs). Experimental results from two different test beds demonstrate that GD and RFE-GD outperform traditional AP selection schemes, reducing the mean localization error by 40.58%–41.13%. The experiments based on different fingerprinting approaches also confirm the advantages of the proposed algorithms. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
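The backward-elimination idea behind RFE-GD (repeatedly drop the AP whose removal costs the group score the least, rather than ranking APs individually) can be sketched generically; the scoring function below is a toy stand-in for the SVM-margin-based GD value:

```python
def rfe_select(aps, group_score, k):
    """Backward-elimination sketch of the RFE-GD idea: keep removing
    the AP whose removal hurts the GROUP's discriminability least,
    until k APs remain.  `group_score` stands in for the paper's
    SVM-margin-based GD value of a set of APs."""
    selected = list(aps)
    while len(selected) > k:
        # The AP to drop is the one whose remaining group scores highest.
        worst = max(selected,
                    key=lambda ap: group_score([a for a in selected if a != ap]))
        selected.remove(worst)
    return selected

# Toy score: APs 1 and 2 are only strong TOGETHER (dependence between
# APs, which individual ranking would miss); AP 3 is uninformative.
def score(group):
    s = sum({1: 2, 2: 2, 3: 0}.get(ap, 1) for ap in group)
    if 1 in group and 2 in group:   # the pair is worth extra jointly
        s += 3
    return s

best_two = rfe_select([1, 2, 3], score, k=2)
```

Because the score is evaluated on groups, the jointly informative pair survives and the noise AP is eliminated first.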
11. Clustering Data Streams Based on Shared Density between Micro-Clusters.
- Author
-
Hahsler, Michael and Bolaños, Matthew
- Subjects
DATA mining, DOCUMENT clustering, ALGORITHM research, DATA distribution, INFORMATION processing - Abstract
As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so-called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights. However, information about density in the area between micro-clusters is not preserved in the online process, and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods, which require the creation of a larger number of smaller micro-clusters to achieve comparable results. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
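The shared-density idea can be illustrated in miniature: points that fall inside two fixed-radius micro-clusters increment a shared counter on that pair, and the offline step merges pairs whose shared density is high enough. This is a simplified sketch (no decay, no weighting by overlap area), not the full DBSTREAM algorithm:

```python
from math import dist  # Python 3.8+

def dbstream_sketch(stream, radius=1.0, alpha=2):
    """Minimal illustration of the shared-density idea: micro-clusters
    (MCs) are fixed-radius centres created online; a point falling
    inside TWO MCs increments a shared-density counter on that MC pair.
    Offline, MC pairs whose shared count reaches `alpha` are merged by
    connected components."""
    centers, weight, shared = [], [], {}
    for p in stream:
        near = [i for i, c in enumerate(centers) if dist(p, c) <= radius]
        if not near:
            centers.append(p)
            weight.append(1)
            continue
        for i in near:
            weight[i] += 1
        for i in range(len(near)):
            for j in range(i + 1, len(near)):
                key = (near[i], near[j])
                shared[key] = shared.get(key, 0) + 1
    # Offline reclustering: union MCs linked by enough shared density.
    parent = list(range(len(centers)))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for (i, j), cnt in shared.items():
        if cnt >= alpha:
            parent[find(i)] = find(j)
    return centers, [find(i) for i in range(len(centers))]

stream = [(0.0, 0.0), (0.8, 0.0),        # two micro-cluster centres form
          (0.4, 0.0), (0.45, 0.1),       # bridge points shared by both MCs
          (5.0, 5.0), (5.3, 5.1)]        # a separate dense region
centers, labels = dbstream_sketch(stream, radius=0.5)
```

The bridge points give the first two micro-clusters a shared density of 2, so reclustering merges them, while the distant micro-cluster stays separate; center-distance reclustering alone could not make that distinction.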
12. Dynamic Grain State Estimation for High-Density TDMR: Progress and Future Directions.
- Author
-
Sun, Xueliang, Belzer, Benjamin J., and Sivakumar, Krishnamoorthy
- Subjects
DATA tapes, ALGORITHM research, MESSAGE passing (Computer science), DETECTORS, COMPUTER science research - Abstract
Dynamic grain state estimation (DGSE) algorithms for 2-D magnetic recording (TDMR) employ probabilistic message-passing algorithms that jointly estimate magnetic grain configurations and coded data bits, in order to iteratively assist channel decoding. At high densities (e.g., between 1 and 3 magnetic grains per coded bit), occasionally, a bit will not be written on any grain, and hence will effectively be overwritten (or erased) by bits on surrounding grains. DGSE enables the detection of overwritten bits so that their log-likelihood ratios are assigned small magnitudes, effectively making them erasures, which are easily filled in by linear channel codes. Past papers employing Bahl-Cocke-Jelinek-Raviv-based detectors on a simple four-rectangular-grain model have shown that the DGSE is surprisingly resilient to mismatch between the detector’s assumed grain model and the actual model generating the data. This paper presents a novel DGSE–TDMR detector based on the generalized belief propagation (GBP) algorithm. The new detector employs a random discretized-nuclei Voronoi grain model. Simulation results show that the GBP-based TDMR turbo-detector accurately detects the overwritten bits and that it achieves low decoded bit error rates at densities as high as 0.4966 user bits per grain. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
13. A Study on Data Mining of Digital Display Performance of Brand Advertisement.
- Author
-
Guo, Meiwen
- Subjects
DATA mining, DIGITAL media, ADVERTISING, ALGORITHMS, BIG data, GENETIC algorithms - Abstract
As the economy has entered a new normal, the pattern of economic development must shift from extensive to intensive, and the core engine of this process is big data. The main question explored in this study is the key influencing factors of brand digital display advertising against the background of big data. Based on a genetic algorithm, this paper builds a key influencing factor model of brand digital display advertising. According to the difference factors that industry experts attach importance to, the paper makes further application analysis. By setting up the key influencing factors model of digital display advertising in a big data environment, we can inject practical viewpoints into existing models, improve the existing theoretical system and strengthen the practical exploration of theoretical research. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
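A minimal binary genetic algorithm of the kind the study builds on can be sketched as follows; the chromosome encoding (candidate influencing factors switched on or off) and the toy fitness are illustrative assumptions, not the paper's model:

```python
import random

def genetic_search(fitness, n_bits, pop_size=20, gens=40, seed=3):
    """Minimal binary genetic algorithm: truncation selection of the
    top half, one-point crossover, and occasional bit-flip mutation.
    A chromosome switches candidate influencing factors on or off."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitism: best half survives
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:            # mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: factors 0, 2, 4 help the model, the rest hurt it.
good = {0, 2, 4}
fit = lambda chrom: sum(1 if i in good else -1 for i, g in enumerate(chrom) if g)
best = genetic_search(fit, n_bits=6)
```

Because the best half is carried over unchanged each generation, the best fitness found never decreases; in the paper's setting the fitness would instead score a candidate factor model against the advertising data.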
14. Design and Implementation of a System Security Experiment Platform Based on Cloud.
- Author
-
Li, Tao
- Subjects
SECURITY personnel, INFORMATION technology security, CLOUD computing security measures, ALGORITHM research, ALGORITHM software, TRAINING - Abstract
Systematic security practice training is of great significance to the training of information security personnel. In this paper, the design and implementation of a cloud-based system security experiment platform are studied. First, on the basis of the system security experiment platform, this paper constructs the system security detection algorithm and analyzes the selection strategy of the computing scale. Second, the author explains the implementation steps of the algorithm, and finally the algorithm is tested. The results are analyzed from three aspects: the entropy-value feature, the active entropy algorithm and cloud platform performance. The conclusion is that the algorithm designed in this paper has good accuracy and adaptability, and it can play a good supporting role in system security practice training. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
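The entropy-value feature mentioned in the results can be illustrated with Shannon entropy over a stream of events; the idea that concentrated (attack-like) traffic has markedly lower entropy than diverse normal traffic is a common detection heuristic, and the data below are made up:

```python
from collections import Counter
from math import log2

def shannon_entropy(events):
    """Shannon entropy (bits) of the empirical distribution of events.
    An entropy-value feature of this kind can flag concentrated,
    attack-like traffic (low entropy over, e.g., destination addresses)
    against diverse normal traffic (high entropy)."""
    counts = Counter(events)
    n = len(events)
    return -sum(c / n * log2(c / n) for c in counts.values())

normal = ["a", "b", "c", "d"] * 5   # traffic spread over many destinations
attack = ["a"] * 19 + ["b"]         # traffic focused on a single target
```

The uniform stream scores exactly 2 bits while the concentrated one scores well under 0.5 bits, so a simple threshold separates them; a real detector would track this feature over sliding windows.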
15. Mean-Square Stability Analysis of a Normalized Least Mean Fourth Algorithm for a Markov Plant.
- Author
-
Eweda
- Subjects
ADAPTIVE filters, ALGORITHM research, MEAN square algorithms, MARKOV processes, SIGNAL processing - Abstract
Recently, it has been shown that the stability of the least mean fourth (LMF) algorithm depends on the nonstationarity of the plant. The present paper investigates the possibility of overcoming this problem by normalization of the weight vector update term. A rigorous mean-square stability analysis is provided for a recent normalized LMF algorithm, which is normalized by a term that is second order in the estimation error and fourth order in the regressor. The analysis is done for a Markov plant with a stationary white input with even probability density function and a stationary zero-mean white noise. It is proved that the mean-square deviation (MSD) of the algorithm is bounded for all finite values of the input variance, noise variance, initial MSD, and mean-square plant parameter increment. Analytical results are supported by simulations. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
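A sketch of a normalized LMF update consistent with the abstract's description, a normalizer second order in the estimation error and fourth order in the regressor, is shown below. The exact form eps + e^2 * ||x||^4 is an assumption; the paper's precise normalizer may differ:

```python
import random

def nlmf_step(w, x, d, mu=0.5, eps=1e-3):
    """One weight update of a normalized least-mean-fourth (NLMF)
    adaptive filter.  The normalizer is taken as eps + e^2 * ||x||^4,
    i.e. second order in the error e and fourth order in the regressor
    x, matching the abstract's description in form only."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))        # estimation error
    norm = eps + e * e * sum(xi * xi for xi in x) ** 2  # eps + e^2 * ||x||^4
    return [wi + mu * (e ** 3) * xi / norm for wi, xi in zip(w, x)]

# Identify a 2-tap plant w* = [1.0, -0.5] from noiseless data; regressor
# entries are kept away from zero so the normalized step stays stable.
rng = random.Random(0)
w_true, w = [1.0, -0.5], [0.0, 0.0]
for _ in range(2000):
    x = [rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 1.0) for _ in range(2)]
    d = sum(wt * xi for wt, xi in zip(w_true, x))
    w = nlmf_step(w, x, d)
```

When the error is large, the normalization caps the effective step near mu / ||x||^2 (an NLMS-like, input-variance-independent step), which is exactly the stabilizing effect the abstract attributes to normalization of the LMF update.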
16. College Student Reader Model and Its Reading Recommendation Algorithm Based on Data Mining
- Author
-
Peng, Xin, Nie, Lixin, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Zhang, Yinjun, editor, and Shah, Nazir, editor
- Published
- 2024
- Full Text
- View/download PDF
17. Benchmarking Performance of a Hybrid Intel Xeon/Xeon Phi System for Parallel Computation of Similarity Measures Between Large Vectors.
- Author
-
Czarnul, Paweł
- Subjects
COMPUTING platforms, ALGORITHM research, HYBRID systems, VECTOR valued functions, CENTRAL processing units - Abstract
The paper deals with parallelization of computing similarity measures between large vectors. Such computations are important components of many applications and are consequently performance-critical. Rather than focusing on optimization of the algorithm itself, assuming specific measures, the paper assumes a general scheme for finding similarity measures for all pairs of vectors and investigates optimizations for scalability in a hybrid Intel Xeon/Xeon Phi system. Hybrid systems including multicore CPUs and many-core compute devices such as the Intel Xeon Phi allow parallelization of such computations using vectorization but require proper load balancing and optimization techniques. The proposed implementation uses C/OpenMP with the offload mode to Xeon Phi cards. Several results are presented: execution times for various partitioning parameters such as batch sizes of vectors being compared, the impact of dynamic adjustment of batch size, and overlapping computations and communication. Execution times for comparison of all pairs of vectors are presented, as well as those for which similarity measures account for a predefined threshold. The latter makes load balancing more difficult and is used as a benchmark for the proposed optimizations. Results are presented for the native mode on an Intel Xeon Phi, CPU only and the CPU + offload mode for a hybrid system with 2 Intel Xeons with 20 physical cores and 40 logical processors and 2 Intel Xeon Phis with a total of 120 physical cores and 480 logical processors. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
18. A New Representation of FFT Algorithms Using Triangular Matrices.
- Author
-
Garrido, Mario
- Subjects
FAST Fourier transforms, ALGORITHM research, TRIANGULARIZATION (Mathematics), FLOWGRAPHS, COMPUTER architecture, MATHEMATICAL models - Abstract
In this paper we propose a new representation for FFT algorithms called the triangular matrix representation. This representation is more general than the binary tree representation and, therefore, it introduces new FFT algorithms that were not discovered before. Furthermore, the new representation has the advantage that it is simple and easy to understand, as each FFT algorithm only consists of a triangular matrix. Besides, the new representation allows for obtaining the exact twiddle factor values in the FFT flow graph easily. This facilitates the design of FFT hardware architectures. As a result, the triangular matrix representation is an excellent alternative to represent FFT algorithms and it opens new possibilities in the exploration and understanding of the FFT. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
19. A Hermite interpolation model for reconstructing the frequency spectrum of the lightning horizontal electric field.
- Author
-
Zhang, Boyuan, Zou, Jun, Lee, Jaebok, and Ju, Mun-no
- Subjects
ELECTROMAGNETIC fields, LIGHTNING research, ALGORITHM research, POLYNOMIALS, INTERPOLATION - Abstract
Purpose – The purpose of this paper is to propose a fast algorithm to calculate the lightning horizontal electric field over a lossy ground. Design/methodology/approach – The lightning horizontal electric field in frequency domain is approximated by a number of piecewise cubic polynomial functions by using the proposed adaptive Hermite strategy. To utilize the Hermite strategy, the frequency domain spectrum and its derivative with respect to frequency are required. The integral kernel of the derivative appears singular along the real axis. To overcome this singularity and accelerate the calculation, a new integration path is proposed. With the help of the Hermite interpolation model and the new path, the lightning horizontal electric field in time domain can be obtained rapidly. Findings – The singularity problem has been overcome with the new integration path and the adaptive Hermite strategy proposed in this paper is at least 50 times faster than the one using the equally spaced sampling approach. Originality/value – The adaptive Hermite approach can be a good candidate for fitting a wideband frequency domain response and the revised new integration path can be utilized when the calculation of the generalized Sommerfeld integral or its derivative with respect to frequency is involved. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
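The building block of the interpolation model above, a cubic Hermite segment constructed from endpoint values and endpoint derivatives, can be written directly. A cubic Hermite segment reproduces any cubic exactly, which the usage below checks against f(x) = x^3; the adaptive subdivision and the integration path from the paper are not reproduced here:

```python
def hermite_eval(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] from endpoint values
    (f0, f1) and endpoint derivatives (d0, d1), the piecewise building
    block of a Hermite model of a frequency spectrum.  Uses the
    standard Hermite basis polynomials in t = (x - x0) / h."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1

# Cubic Hermite reproduces any cubic exactly: check with f(x) = x^3.
f = lambda x: x**3
df = lambda x: 3 * x**2
val = hermite_eval(1.0, 2.0, f(1.0), f(2.0), df(1.0), df(2.0), 1.5)
```

An adaptive strategy of the kind the abstract describes would keep bisecting a frequency interval until the Hermite segment matches a midpoint sample to within tolerance, which is why having the derivative of the spectrum available matters.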
20. A Status Graph Based Control Allocation Algorithm of Digital Micro-Thruster Array for Micro/Nano-Satellites Orbit Control Application.
- Author
-
ZHANG Dandan, ZHANG Yunyi, DONG Ke, LI Haiwang, and WANG Shaoping
- Subjects
NANOSATELLITES, COLLOID thrusters, MATHEMATICAL models, ALGORITHM research, REAL-time control - Abstract
Digital micro-thruster arrays can be used for special missions of micro/nano-satellites with requirements of high precision and small impulse. This paper presents a novel control allocation algorithm for the digital micro-thruster array, namely the status graph based control allocation (SGBCA) algorithm, which aims at finding the optimal micro-thruster combination scheme to realize sequential control synthesis for micro/nano-satellites during real-time orbit control tasks. A mathematical model is set up for the control allocation of this multivariate over-actuated system. By dividing thrusters into disjoint segments through offline calculation and combining segments dynamically online to provide a sequence of the required impulse for the micro/nano-satellite, the time complexity of the control allocation algorithm decreases significantly. All levels of impulse can be generated by the digital micro-thruster arrays, and the service life of the arrays can be extended using the segment converting strategy proposed in this paper. The simulation indicates that the algorithm can satisfy the requirements of real-time orbit control for micro/nano-satellites. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. The Design and Implementation of an Embedded Real-Time Automated IC Marking Inspection System.
- Author
-
Hsu, Fan-Hau and Shen, Chung-An
- Subjects
INTEGRATED circuits, ALGORITHM research, ALGORITHMIC randomness, IMAGE processing, DATA libraries - Abstract
This paper presents the system and algorithm designs of a real-time automated integrated circuit (IC) marking inspection system based on an embedded platform. Specifically, the system-level design and the integration of hardware and software components are illustrated in this paper. Furthermore, in order to enhance the accuracy of IC inspection, this paper presents a novel algorithm which integrates the classic template matching approach with an efficient angle estimation method, so that the rotation and location of the IC chip can be identified precisely. A formal outline of the proposed algorithm is given in this paper. Moreover, aiming at reducing the computation time, algorithmic optimizations based on the multi-core embedded processor and the single instruction, multiple data architecture are presented. The experiment results show that, compared to the conventional IC marking inspection algorithm, the proposed system and algorithm greatly improve the efficiency and accuracy of image processing on the embedded system. In particular, when the size of the target image is 640×480 pixels and the size of the template image is 80×100 pixels, the average inspection time is 31 ms, which is an approximately 20× improvement over the conventional inspection scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Distributed Deployment Algorithms for Efficient Coverage in a Network of Mobile Sensors With Nonidentical Sensing Capabilities.
- Author
-
Mahboubi, Hamid, Moezzi, Kaveh, Aghdam, Amir G., and Sayrafian-Pour, Kamran
- Subjects
DISTRIBUTED algorithms, VORONOI polygons, SENSOR networks, DETECTORS, ALGORITHM research - Abstract
In this paper, efficient deployment algorithms are proposed for a mobile sensor network to improve the coverage area. The proposed algorithms find the target position of each sensor iteratively, based on the existing coverage holes in the network. The multiplicatively weighted Voronoi (MW-Voronoi) diagram is used to discover the coverage holes corresponding to different sensors with different sensing ranges. Three sensor deployment algorithms are provided, which tend to either move sensors out of densely packed areas or place them in proper positions with respect to the boundaries of the MW-Voronoi regions. Under the proposed procedures, the sensors move in such a way that the coverage holes in the target field are reduced. Simulations confirm the effectiveness of the deployment algorithms proposed in this paper. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
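The MW-Voronoi assignment that the abstract uses to discover coverage holes can be sketched on a grid: a point belongs to the sensor minimizing distance divided by sensing range, and it is a hole if even its owner cannot sense it. This is a simplified illustration (the iterative sensor-movement logic is omitted), and all geometry below is made up:

```python
from math import hypot

def mw_voronoi_holes(sensors, field, step=1.0):
    """Grid sketch of the MW-Voronoi idea: a grid point belongs to the
    sensor minimizing d(p, s_i) / r_i (distance divided by sensing
    range), and it is a coverage hole if even the owner cannot sense
    it (d > r_i).  sensors: list of (x, y, range); field: (width, height)."""
    w, h = field
    owner, holes = {}, []
    y = 0.0
    while y <= h:
        x = 0.0
        while x <= w:
            best = min(range(len(sensors)),
                       key=lambda i: hypot(x - sensors[i][0], y - sensors[i][1]) / sensors[i][2])
            owner[(x, y)] = best
            sx, sy, r = sensors[best]
            if hypot(x - sx, y - sy) > r:
                holes.append((x, y))
            x += step
        y += step
    return owner, holes

# Two sensors with NONIDENTICAL sensing ranges on a 1-D strip of grid
# points (height 0 keeps the example tiny).
sensors = [(0.0, 0.0, 3.0), (8.0, 0.0, 2.0)]
owner, holes = mw_voronoi_holes(sensors, field=(8.0, 0.0), step=1.0)
```

The weighted boundary falls where x/3 = (8-x)/2, so the stronger sensor owns a larger region, and the uncovered gap between the two sensing disks shows up as the hole list that a deployment algorithm would then try to shrink.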
23. Accelerated methods for total tardiness minimisation in no-wait flowshops.
- Author
-
Ding, Jianya, Song, Shiji, Zhang, Rui, Gupta, Jatinder N.D., and Wu, Cheng
- Subjects
TARDINESS, PERSONNEL management, ALGORITHM research, HEURISTIC algorithms, JOB analysis, OCCUPATIONS - Abstract
For the minimisation of total tardiness in no-wait flowshops, objective incremental properties are investigated in this paper to speed up the evaluation of candidate solutions. To explore the properties, we introduce a new concept of sensitive jobs and identify through experiments that the proportion of such jobs is very small. Instead of evaluating the tardiness of each job, we focus on the evaluation of sensitive jobs, which helps reduce the computational effort. With these properties, the time complexity of the NEH-insertion procedure is reduced from to approximately on average. Then, an accelerated NEH algorithm and an accelerated iterated greedy algorithm are designed for the problem. Since the NEH-insertion procedure constitutes the main computational burden for both algorithms, these algorithms benefit directly from the speedup. Numerical computations show that the accelerated algorithms perform 10–40 times faster than the original algorithms on middle- and large-sized instances. In addition, comparisons show that the proposed algorithms perform more efficiently and effectively than the existing heuristics and meta-heuristics. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
24. Electric fish optimization for economic load dispatch problem.
- Author
-
Yıldız, Yağmur Arıkan, Akkaş, Özge Pınar, Saka, Mustafa, Çoban, Melih, and Eke, İbrahim
- Subjects
- *
ELECTRIC fishes , *ELECTRIC power , *ELECTRIC power systems , *ALGORITHM research , *GENERATORS (Computer programs) - Abstract
The Economic Load Dispatch (ELD) problem is an essential aspect of power system planning and operational scheduling. Various techniques and algorithms have been recommended to solve it, aiming to minimize the cost of power generation while satisfying the load requirements. In this paper, a new algorithm called Electric Fish Optimization (EFO) is used to solve the ELD problem, considering line losses, ramp rate limits, maximum and minimum generator capacities and prohibited operating zones (POZ). The algorithm has been applied to test systems of 6 and 15 units, and its outcomes have been compared with those of previous studies. The proposed algorithm achieves minimum cost, indicating its superiority and effectiveness in addressing power system planning challenges, and offers a valuable solution for optimizing ELD problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
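Independent of the metaheuristic chosen, ELD solvers of the kind described above minimise a penalised fuel-cost objective. A minimal sketch of that fitness function (EFO itself is not reproduced; the quadratic cost model is standard, but the penalty weight is an illustrative assumption, and line losses, ramp limits and POZ are omitted):

```python
def eld_fitness(P, a, b, c, demand, pmin, pmax, w=1e4):
    # Quadratic fuel cost F_i(P_i) = a_i + b_i*P_i + c_i*P_i^2 ($/h),
    # plus a penalty w on power-balance and capacity violations.
    cost = sum(ai + bi * p + ci * p * p
               for p, ai, bi, ci in zip(P, a, b, c))
    viol = abs(sum(P) - demand)                 # power balance
    for p, lo, hi in zip(P, pmin, pmax):
        viol += max(0.0, lo - p) + max(0.0, p - hi)
    return cost + w * viol
```

Any population-based optimiser (EFO, PSO, GA, ...) can rank candidate dispatches by this fitness; POZ would add one more penalty term for outputs falling inside a prohibited zone.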
25. Research on the dehazing effect of orthogonal polarization method based on atmospheric scattering model and considering extinction ratio parameter.
- Author
-
Shi, Dongdong, Huang, Fuyu, Jia, Leilei, Niu, Yuandong, Chen, Shuangyou, Jiao, Liting, Huang, Yanhua, and Liu, Limin
- Subjects
- *
ATMOSPHERIC models , *IMAGE reconstruction , *LAPLACIAN operator , *ORTHOGRAPHIC projection , *RESEARCH personnel , *IMAGE intensifiers - Abstract
Hazy weather is currently one of the most common atmospheric environments, and achieving clear imaging in haze is an urgent problem. Orthogonal polarization dehazing is a classical dehazing technique that has attracted widespread attention from researchers. However, when this method is used, the extinction ratio (ER) parameter of the polarizer is usually neglected, leading to poor dehazing. We therefore introduce the ER of polarizers into the orthogonal polarization dehazing technique. For sky region division, this study proposes to combine HSV, RGB and Lab colour channels into a comprehensive mask that selects the sky region, and a morphological erosion operation is applied to increase the region's connectivity for better division. Subjective visualization shows that the dehazing effect is significantly enhanced by introducing the ER. The dehazing effect was also evaluated with four metrics, contrast (C0), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) and information entropy (En), which further demonstrates the necessity of introducing the ER. The experimental results show that the ER improves the dehazing effect of the orthogonal polarization technique, which is valuable for engineering applications of image processing in hazy environments. The main highlights of this work are the following three points. (1) The NDC parameter of the polarizer is a crucial factor that many researchers overlook when using polarizers for detection in the polarization dehazing process; this study addresses that oversight by introducing the NDC parameter to rectify images at varying polarization angles, so the polarization dehazing operation yields superior results compared with those achieved without it. (2) The sky area is separated from the non-sky area with a multi-channel mask: appropriate masks are generated in the HSV, RGB and Lab colour channels, merged into a comprehensive mask, and morphological erosion is then applied to the comprehensive mask to enhance the connectivity of the sky area, significantly improving the separation. (3) Image restoration is combined with image enhancement: after the polarization dehazing operation, the restored image is enhanced, with the standard deviation of a Gaussian function regulating the filtering level and the Laplacian operator intensifying the edges of the current channel; adjusting these parameters controls the intensity and scope of the enhancement, achieving edge enhancement and augmenting image detail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
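The sky-segmentation step described above can be prototyped independently of the polarization optics: threshold each colour space, intersect the masks, then erode. A minimal numpy sketch, with illustrative thresholds (the paper's actual channel criteria are not given in the abstract):

```python
import numpy as np

def sky_mask(rgb, hsv, lab):
    # Combine per-channel sky cues into one comprehensive mask.
    # Thresholds are illustrative assumptions, not the paper's values.
    m_hsv = (hsv[..., 1] < 0.25) & (hsv[..., 2] > 0.6)   # low saturation, bright
    m_rgb = rgb[..., 2] >= rgb[..., 0]                   # bluish pixels
    m_lab = lab[..., 0] > 60.0                           # high lightness
    return m_hsv & m_rgb & m_lab

def erode3x3(mask, iterations=1):
    # Binary erosion with a 3x3 structuring element: a pixel survives only
    # if its whole neighbourhood is True, removing stray mask pixels.
    m, (h, w) = mask, mask.shape
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=False)
        m = np.ones((h, w), dtype=bool)
        for dy in range(3):
            for dx in range(3):
                m &= p[dy:dy + h, dx:dx + w]
    return m
```

In practice the HSV and Lab planes would come from colour-space conversion of the captured frame (e.g. via OpenCV), not be supplied separately as here.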
26. LA DISCRIMINACIÓN ALGORÍTMICA POR RAZÓN DE DISCAPACIDAD [Algorithmic discrimination on the grounds of disability].
- Author
-
Álvarez García, Héctor
- Subjects
ARTIFICIAL intelligence ,SOCIAL groups ,DIGITAL technology ,PEOPLE with disabilities ,DISCRIMINATION (Sociology) ,ALGORITHM research ,GROUP formation ,VIOLENCE - Abstract
Copyright of Revista Internacional de Pensamiento Político is the property of Revista Internacional de Pensamiento Politico - Universidad Pablo de Olavide and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
27. A New Model-Based Rotation and Scaling-Invariant Projection Algorithm for Industrial Automation Application.
- Author
-
Shih, Huang-Chia and Yu, Kuan-Chun
- Subjects
ALGORITHM research ,COMPUTER vision ,ROBOTICS ,AUTONOMOUS vehicles ,GRAPHICAL projection - Abstract
This paper describes a simple approach to model-based template matching that is robust to rotation and scaling variations. An efficient image warping scheme, the spiral aggregation image (SAI), is used to generate projection profiles for matching; it determines the rotation angle and is invariant to scale changes. The proposed spiral projection algorithm (SPA) for template matching represents each value of the projection profile obtained through the SAI and provides structural and statistical information on the template. The experimental evaluation shows that the proposed SPA achieves very attractive results for template matching in industrial automation applications. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
28. Evaluation of SPMA and higher order sectorization for homogeneous SIR through macro sites.
- Author
-
Sheikh, Muhammad, Säe, Joonas, and Lempiäinen, Jukka
- Subjects
WIRELESS communications ,MULTIPLE access protocols (Computer network protocols) ,ALGORITHM research ,RAY tracing ,INTERFERENCE (Telecommunication) - Abstract
This paper highlights the performance of single path multiple access (SPMA) and compares higher order sectorization with SPMA in a macrocellular environment. The aim is to emphasize the gains and significance of the novel SPMA concept in achieving better, more homogeneous SIR and enhanced system capacity in a macrocellular environment. The paper also explains the algorithm for SIR computation in SPMA. The results are based on sophisticated 3D ray tracing simulations performed with real-world 3D building data and site locations from Seoul, South Korea, in a macrocellular environment dominated by indoor users. It is found that increasing the order of sectorization degrades SIR and spectral efficiency due to increased inter-cell interference. However, because of the better area spectral efficiency from the increased number of sectors (cells), higher order sectorization offers more system capacity than the traditional 3-sector site. Furthermore, SPMA shows outstanding performance: it significantly improves the SIR of individual users over the whole coverage area and remarkably increases system capacity. In the environment under consideration, the simulation results reveal that SPMA can offer approximately 424 times more system capacity than the reference 3-sector site. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. Processing of Very High Resolution Spaceborne Sliding Spotlight SAR Data Using Velocity Scaling.
- Author
-
Wu, Yuan, Sun, Guang-Cai, Yang, Chun, Yang, Jun, Xing, Mengdao, and Bao, Zheng
- Subjects
SYNTHETIC aperture radar ,ALGORITHM research ,AZIMUTH ,HYPERBOLIC functions ,OPTICAL resolution - Abstract
In spaceborne synthetic aperture radar, the sliding spotlight mode can acquire high resolution and large azimuth scene size simultaneously. However, when the resolution is very high and the azimuth scene size is large, the traditional hyperbolic range model (HRM) is inaccurate and the variation of the equivalent velocity in azimuth dimension cannot be ignored. Thus, the traditional imaging algorithms based on HRM are no longer available. For this problem, this paper proposes an equivalent acceleration range model, which can precisely take into account the spaceborne curved orbit. Then, velocity scaling algorithm based on this new range model is proposed to meet the needs of very high resolution and large azimuth scene size. The results of the simulation validate the effectiveness of the new range model and the imaging algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
30. Time-Varying Parameter Adaptive Vehicle Speed Control.
- Author
-
Kim, Hakgu, Kim, Dongwook, Shu, Insoo, and Yi, Kyongsu
- Subjects
TIME-varying systems ,AUTOMOBILE speed ,AUTOMOBILE brakes ,AUTOMATIC systems in automobiles ,ALGORITHM research - Abstract
This paper presents a novel approach to time-varying parameter adaptive throttle and brake control for vehicle speed tracking. A control algorithm has been developed based on a linearized longitudinal vehicle model with characteristic lumped parameters. The lumped parameters are slowly time-varying, except when the vehicle experiences a gear shift. A combined parameter-adaptation and throttle/brake control algorithm has been developed, and its performance has been evaluated via simulations and vehicle tests. Since the proposed control algorithm is designed using a generic form of the vehicle model, it can be implemented for different classes of vehicles with no information about the vehicle powertrain or brake system. Both simulations and vehicle tests show that the vehicle speed tracking performance is robust to external disturbances. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
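The slowly time-varying lumped parameters mentioned above are typically tracked online with a recursive estimator. The abstract does not give the paper's adaptation law; a minimal recursive least-squares sketch with a forgetting factor, which is one standard choice for this job:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    # One recursive least-squares update for the model y ~ phi @ theta,
    # with forgetting factor lam so slowly drifting parameters (e.g. the
    # lumped longitudinal-model coefficients) are tracked over time.
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())
    err = y - (phi.T @ theta).item()            # prediction error
    theta = theta + gain * err
    P = (P - gain @ phi.T @ P) / lam            # covariance update
    return theta, P
```

Here `phi` would collect measured regressors (throttle/brake input, speed terms) and `y` the measured acceleration; the controller then uses the current `theta` in its throttle/brake law.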
31. Vehicle-to-Vehicle Forwarding in Green Roadside Infrastructure.
- Author
-
Azimifar, Morteza, Todd, Terence D., Khezrian, Amir, and Karakostas, George
- Subjects
VEHICULAR ad hoc networks ,FLOWGRAPHS ,ALGORITHM research ,SCHEDULING ,MATHEMATICAL models ,INTELLIGENT transportation systems - Abstract
Smart scheduling can be used to reduce infrastructure-to-vehicle energy costs in delay-tolerant vehicular networks. In this paper, we show that, by combining this with vehicle-to-vehicle (V2V) forwarding, downlink (DL) traffic schedules can be generated whose energy costs are lower than in the single-hop case. This is accomplished by having the roadside units (RSUs) dynamically forward packets through vehicles that are in energy-favorable locations. The paper considers both constant bit rate (CBR) and variable bit rate (VBR) air-interface options. We first derive offline schedulers for DL RSU energy usage when V2V forwarding is added to RSU-to-vehicle communication, covering both in-channel and off-channel forwarding. The CBR and VBR cases are formulated using integer linear programming (ILP) and time-expanded graphs (TEGs), respectively. These schedulers provide lower bounds on energy performance and are used for comparison with a variety of proposed online scheduling algorithms. The first algorithm is based on greedy local optimization (GLOA); a version that uses a minimum-cost flow graph (MCFG) scheduler is also introduced, followed by a more sophisticated algorithm based on finite-window group optimization (FWGO). Results from various experiments show that the proposed algorithms can generate traffic schedules with much improved DL energy requirements compared with the case where V2V packet forwarding is not used. The performance improvements are particularly strong under heavy loading conditions and when the variation in vehicle communication requirements or vehicle speed is high. Results comparing the proposed algorithms with conventional non-energy-aware schedulers are also presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
32. Dual-Resolution Friend Locator System With Privacy Enhancement Through Polygon Decomposition.
- Author
-
Zan, Bin, Hu, Fei, Bao, Ke, and Hao, Qi
- Subjects
ONLINE social networks research ,RIGHT of privacy ,SYSTEMS design ,MOBILE communication systems ,ALGORITHM research - Abstract
Online social networks have become increasingly popular. One of the interesting applications is the friend locator, in which the application server informs a user through a mobile device if some of his/her listed friends are close enough in terms of geographical locations. However, in such services, it is challenging to protect the privacy of the individual users. Previous solutions for the friend locator do not guarantee a high level of privacy and do not maintain efficiency. In this paper, we propose a dual-resolution system structure to guarantee both strong privacy and efficiency. Additionally, we use the polygon decomposition method to achieve both accuracy and flexibility. To be more specific, in the coarse resolution level, each regular mobile user uploads his/her encrypted coarse location information to a central server periodically for comparisons. If a regular mobile user is found to be in the same grid block as an active mobile user, then the friend locator procedure with a higher resolution level will be conducted. Finally, through numerical analysis and simulations, we show that the proposed system design and algorithm can achieve high privacy, efficiency, accuracy, and flexibility. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
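The coarse-resolution level described above reduces, in essence, to grid-block equality testing on data the server cannot read. A minimal sketch (the paper's actual encryption scheme is not specified in the abstract; a salted hash shared among friends is a stand-in assumption):

```python
import hashlib

def coarse_token(lat, lon, cell_deg, salt):
    # Map a position to its grid block, then upload only a salted hash of
    # the block ID: the server can test whether two friends are in the
    # same block without learning which block that is.
    cell = (int(lat // cell_deg), int(lon // cell_deg))
    return hashlib.sha256(f"{salt}:{cell}".encode()).hexdigest()
```

Only when two users' tokens match would the higher-resolution level (the polygon-decomposition protocol) be run between them.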
33. Imbalanced Data Processing Model for Software Defect Prediction.
- Author
-
Zhou, Lijuan, Li, Ran, Zhang, Shudong, and Wang, Hua
- Subjects
SOFTWARE engineering ,DEVIATION (Statistics) ,ERROR analysis in mathematics ,ALGORITHM software ,ALGORITHM research - Abstract
In the field of software engineering, software defect prediction is a research hotspot that can effectively help guarantee quality during software development. However, class-imbalanced datasets affect the accuracy of the overall classification in software defect prediction, which is a key issue to be solved. To address this problem, this paper proposes a model named ASRA that combines attribute selection, sampling technologies and an ensemble algorithm. The model adopts the chi-square test for attribute selection and then applies a combined sampling technique, SMOTE over-sampling plus under-sampling, to remove redundant attributes and balance the datasets. The ASRA model is then built with the Adaboost ensemble algorithm using the J48 decision tree as the base classifier. The data used in the experiments come from UCI datasets. Comparing the precision, F-measure and AUC values of the experimental results shows that software defect prediction classification with this model is improved over previous approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
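The over-sampling step named in the abstract above can be sketched in a few lines. This is a generic minimal SMOTE in plain numpy, not the ASRA pipeline or a library implementation:

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    # Classic SMOTE idea: each synthetic sample interpolates between a
    # random minority sample and one of its k nearest minority neighbours.
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # skip the sample itself
        j = rng.choice(nbrs)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)
```

The balanced dataset would then feed the chi-square attribute selection and the AdaBoost/J48 ensemble described above.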
34. A novel POMDP-based server RAM caching algorithm for VoD systems.
- Author
-
Yin, Baoqun, Cao, Jie, Kang, Yu, Lu, Xiaonong, and Jiang, Xiaofeng
- Subjects
VIDEO on demand ,ALGORITHM research ,RANDOM access memory -- Design & construction ,PARTIALLY observable Markov decision processes ,DATA analysis ,COMPUTER software - Abstract
In contrast to other services on the Internet, streaming media services need to fetch data from local disks more frequently, since sessions last long and bit rates are quite high. In addition, because disks are much slower to read and write than random access memory (RAM), an advisable RAM caching policy can efficiently reduce disk I/O. In this paper, we study the problem of reducing disk I/O with a novel approach. We first provide a new popularity estimation algorithm. Then a formal optimization problem for average disk I/O is presented, and a suboptimal caching algorithm for a special case of the problem is given. Furthermore, a partially observable Markov decision process (POMDP) model is constructed for the caching problem. Based on the model, popularity is used to predict clients' randomized behaviors, data replacement decisions are made when the defined observations occur, and the impact of caching actions on disk performance over an infinite future horizon is assessed. The method of event-based optimization is applied in search of the optimal stochastic policy. Disk I/O, as the long-run average performance measure, is optimized by applying the policy-gradient algorithm. The simulation results illustrate that the data required by clients can be better predicted and lower disk I/O can be achieved using the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
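The popularity-driven baseline that the POMDP policy above builds on can be sketched simply: estimate popularity with an exponentially weighted average and evict the least popular cached item. This is an illustrative sketch, not the paper's estimator or optimal policy:

```python
def update_popularity(pop, item, alpha=0.1):
    # EWMA popularity estimate: decay all items, boost the requested one.
    for k in pop:
        pop[k] *= (1 - alpha)
    pop[item] = pop.get(item, 0.0) + alpha
    return pop

def cache_admit(cache, pop, item, capacity):
    # Popularity-based replacement: admit the requested item, evicting the
    # least popular cached item when the RAM cache is full -- and only if
    # the newcomer is actually more popular than the victim.
    if item in cache or len(cache) < capacity:
        cache.add(item)
    else:
        worst = min(cache, key=lambda k: pop.get(k, 0.0))
        if pop.get(item, 0.0) > pop.get(worst, 0.0):
            cache.remove(worst)
            cache.add(item)
    return cache
```

The POMDP formulation replaces this greedy rule with decisions that weigh the long-run disk-I/O impact of each replacement.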
35. Empirical Centroid Fictitious Play: An Approach for Distributed Learning in Multi-Agent Games.
- Author
-
Swenson, Brian, Kar, Soummya, and Xavier, Joao
- Subjects
LEARNING ,NASH equilibrium ,ALGORITHM research ,GAME theory ,GRAPH theory - Abstract
The paper is concerned with distributed learning in large-scale games. The well-known fictitious play (FP) algorithm is addressed, which, despite theoretical convergence results, may be impractical to implement in large-scale settings due to intense computation and communication requirements. An adaptation of the FP algorithm, designated the empirical centroid fictitious play (ECFP), is presented. In ECFP, players respond to the centroid of all players' actions rather than track and respond to the individual actions of every player. Convergence of the ECFP algorithm in terms of average empirical frequency (a notion made precise in the paper) to a subset of the Nash equilibria is proven under the assumption that the game is a potential game with a permutation-invariant potential function. A more general formulation of ECFP is then given (which subsumes FP as a special case) and convergence results are given for the class of potential games. Furthermore, a distributed formulation of the ECFP algorithm is presented, in which players, endowed with a (possibly sparse) preassigned communication graph, engage in local, non-strategic information exchange to eventually agree on a common equilibrium. Convergence results are proven for the distributed ECFP algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
36. Submodular Relaxation for Inference in Markov Random Fields.
- Author
-
Osokin, Anton and Vetrov, Dmitry P.
- Subjects
MARKOV random fields ,LAGRANGIAN functions ,RELAXATION methods (Mathematics) ,ALGORITHM research - Abstract
In this paper we address the problem of finding the most probable state of a discrete Markov random field (MRF), also known as the MRF energy minimization problem. The task is known to be NP-hard in general and its practical importance motivates numerous approximate algorithms. We propose a submodular relaxation approach (SMR) based on a Lagrangian relaxation of the initial problem. Unlike the dual decomposition approach of Komodakis et al. [29], SMR does not decompose the graph structure of the initial problem but constructs a submodular energy that is minimized within the Lagrangian relaxation. Our approach is applicable to both pairwise and high-order MRFs and allows taking into account global potentials of certain types. We study theoretical properties of the proposed approach and evaluate it experimentally. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
37. A New Soft-Sensor-Based Process Monitoring Scheme Incorporating Infrequent KPI Measurements.
- Author
-
Shardt, Yuri A. W., Hao, Haiyang, and Ding, Steven X.
- Subjects
KEY performance indicators (Management) ,DETECTORS ,MONTE Carlo method ,STATISTICAL sampling ,ALGORITHM research - Abstract
The development of advanced techniques for process monitoring and fault diagnosis using both model-based and data-driven approaches has led to many practical applications. One issue that has not been considered in such applications is the ability to deal with key performance indicators (KPIs) that are only sporadically measured and with significant time delay. Therefore, in this paper, the data-driven design of diagnostic-observer-based process monitoring schemes is extended to include the ability to detect changes given infrequently measured KPIs. The extended diagnostic observer is shown to be stable and hence able to converge to the true value. The proposed method is tested using both Monte Carlo simulations and the Tennessee-Eastman problem. It is shown that although time delay and sampling time increase the detection delay, the overall effect can be mitigated by using a soft sensor. Furthermore, it is shown that the results are not strongly dependent on the sampling time, but do depend on the time delay. Therefore, the proposed soft-sensor-based monitoring scheme can efficiently detect faults even in the absence of direct process information. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
38. A Multi-objective Ant Colony Optimization Algorithm Based on Elitist Selection Strategy.
- Author
-
Xiangui Shi and Dekui Kong
- Subjects
MATHEMATICAL optimization ,ALGORITHM research ,ANT algorithms - Abstract
Multi-objective optimization problems are common in science and engineering. This paper explores improvement strategies for the multi-objective ant colony algorithm and proposes an Elitist Multi-objective Ant Colony Optimization (EMOACO). The method improves ant colony fitness based on the Pareto non-dominated set, performs a local search on every individual generated by the ant colony algorithm, and accelerates the parallel search of multiple objectives by adopting an elitist selection strategy to increase the search rate. The experimental results show that the proposed algorithm is effective: compared with the basic multi-objective ant colony algorithm, it improves global optimization capacity and population diversity, converges quickly to the Pareto optimal solution and provides a reliable basis for decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2015
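The elitist archive at the heart of algorithms like the one above is a Pareto non-dominated filter. A minimal sketch for minimisation problems (illustrative, not the paper's implementation):

```python
def dominates(a, b):
    # a dominates b (minimisation): no worse in every objective and
    # strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    # Keep only non-dominated objective vectors -- the elitist archive
    # from which the next colony's guiding solutions would be drawn.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In an elitist ant colony loop, pheromone updates are then restricted to (or biased toward) members of this archive.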
39. Biomarker Discovery Based on Hybrid Optimization Algorithm and Artificial Neural Networks on Microarray Data for Cancer Classification.
- Author
-
Moteghaed, Niloofar Yousefi, Maghooli, Keivan, Pirhadi, Shiva, and Garshasbi, Masoud
- Subjects
ARTIFICIAL neural networks ,BIOMARKERS ,GENE expression ,GENETIC algorithms ,PARTICLE swarm optimization ,ALGORITHM research - Abstract
The improvement of high-throughput gene-profiling microarray technology has enabled monitoring the expression values of thousands of genes simultaneously. Detailed examination of changes in gene expression levels can help physicians diagnose efficiently, classify tumors and cancer types, and treat effectively. Finding genes that can correctly classify groups of cancers using hybrid optimization algorithms is the main purpose of this paper. A hybrid particle swarm optimization and genetic algorithm method is used for gene selection, and an artificial neural network (ANN) is adopted as the classifier. We improve the algorithm's performance on the classification problem by finding a small group of biomarkers and the best parameters of the classifier. The proposed approach is tested on three benchmark gene expression data sets: blood (acute myeloid leukemia, acute lymphoblastic leukemia), colon and breast datasets. We used 10-fold cross-validation to measure accuracy and a decision tree algorithm to find the relations between the biomarkers from a biological point of view. To test the ability of the trained ANN models to categorize the cancers, we analyzed additional blinded samples that were not used for training. Experimental results show that the proposed method can reduce the dimension of the data set, confirm the most informative gene subset and improve classification accuracy with the best parameters for each dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2015
40. NTUplace4h: A Novel Routability-Driven Placement Algorithm for Hierarchical Mixed-Size Circuit Designs.
- Author
-
Hsu, Meng-Kai, Chen, Yi-Fang, Huang, Chau-Chin, Chou, Sheng, Lin, Tzu-Hen, Chen, Tung-Chieh, and Chang, Yao-Wen
- Subjects
INTEGRATED circuit design ,ROUTING algorithms ,INTEGRATED circuits ,ALGORITHM research ,CONTESTS - Abstract
A wirelength-driven placer without considering routability could introduce irresolvable routing-congested placements. Therefore, it is desirable to develop an effective routability-driven placer for modern mixed-size designs employing hierarchical methodologies for faster turnaround time. In this paper, we propose a novel routability-driven analytical placement algorithm for hierarchical mixed-size circuit designs. This paper presents a novel design hierarchy identification technique to effectively identify design hierarchies and guide placement for better wirelength and routability. The proposed algorithm optimizes routability from four major aspects: 1) narrow channel handling; 2) pin density; 3) routing overflow optimization; and 4) net congestion optimization. Routability-driven legalization and detailed placement are also proposed to further optimize routing congestion. Compared with the participating teams for the 2012 ICCAD Design Hierarchy Aware Routability-driven Placement Contest, our placer can achieve the best quality (both the average overflow and wirelength) and the best overall score (by additionally considering running time). [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
41. Development of Autonomous Car—Part I: Distributed System Architecture and Development Process.
- Author
-
Jo, Kichun, Kim, Junsoo, Kim, Dongchul, Jang, Chulhoon, and Sunwoo, Myoungho
- Subjects
AUTOMOBILE research ,ALGORITHM research ,ACTUATORS ,COMPUTER systems ,FAULT tolerance (Engineering) - Abstract
An autonomous car is a self-driving vehicle that has the capability to perceive the surrounding environment and navigate itself without human intervention. For autonomous driving, complex autonomous driving algorithms, including perception, localization, planning, and control, are required with many heterogeneous sensors, actuators, and computers. To manage the complexity of the driving algorithms and the heterogeneity of the system components, this paper applies distributed system architecture to the autonomous driving system, and proposes a development process and a system platform for the distributed system of an autonomous car. The development process provides the guidelines to design and develop the distributed system of an autonomous vehicle. For the heterogeneous computing system of the distributed system, a system platform is presented, which provides a common development environment by minimizing the dependence between the software and the computing hardware. A time-triggered network protocol, FlexRay, is applied as the main network of the software platform to improve the network bandwidth, fault tolerance, and system performance. Part II of this paper will provide the evaluation of the development process and system platform by using an autonomous car, which has the ability to drive in an urban area. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
42. Underwater Acoustic Sensor Networks Deployment Using Improved Self-Organize Map Algorithm.
- Author
-
Hua, Cheng Bing, Wei, Zhao, and Zi Nan, Chang
- Subjects
ALGORITHM research ,GIBBS sampling ,WIRELESS sensor nodes ,WIRELESS sensor networks ,ENERGY consumption management - Abstract
The traditional Self-Organize Map (SOM) method is used for the arrangement of seabed nodes in this paper. If the distance between a node and an event is long, that node cannot become a victory node and is abandoned: it cannot move toward the event, so it is not fully utilized and the balance of energy consumption in the network is destroyed. To address this problem, this paper proposes an improved self-organize map algorithm that introduces the probability-selection mechanism of Gibbs sampling to select victory nodes, thus optimizing the victory-node selection strategy. The simulation results show that the Improved Self-Organize Map (ISOM) algorithm can balance the energy consumption in the network and prolong the network lifetime. Compared with the traditional self-organize map algorithm, the improved algorithm increases the event-driven coverage rate by about 3%. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
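The probability-selection mechanism described above replaces the deterministic "nearest node wins" rule of SOM with a Gibbs-style draw. A minimal sketch (the temperature parameter and distance weighting are illustrative assumptions):

```python
import math
import random

def pick_victory_node(dists, temp=1.0, rng=None):
    # Gibbs-style selection: nearer nodes are more likely to win, but
    # farther nodes keep a nonzero chance, spreading movement (and hence
    # energy use) across the network instead of over the same few nodes.
    rng = rng or random.Random(0)
    weights = [math.exp(-d / temp) for d in dists]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1
```

As temp approaches 0 this reduces to the traditional winner-take-all rule; larger temperatures trade per-event movement efficiency for network-wide energy balance.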
43. A Multiple Migration and Stacking Algorithm Designed for Land Mine Detection.
- Author
-
Schofield, John, Daniels, David, and Hammerton, Paul
- Subjects
LAND mine detection ,ALGORITHM research ,GROUND penetrating radar ,ANTENNAS (Electronics) ,OPTICAL apertures - Abstract
This paper describes a modification to a standard migration algorithm for land mine detection with a ground-penetrating radar (GPR) system. High directivity from the antenna requires a significantly large aperture in relation to the operating wavelength, but at the frequencies of operation of GPR, this would result in a large and impractical antenna. For operator convenience, most GPR antennas are small and exhibit low directivity and a wide beamwidth. This causes the GPR image to bear little resemblance to the actual target scattering centers. Migration algorithms attempt to reduce this effect by focusing the scattered energy from the source reflector and consequentially improve the target detection rate. However, problems occur due to the varying operational conditions, which result in the migration algorithm requiring vastly different calibration parameters. In order to combat this effect, this migration scheme stacks multiple versions of the same migrated data with different velocity values, whereas some other migration schemes only use a single velocity value. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
44. Building Volumetric Appearance Models of Fabric Using Micro CT Imaging.
- Author
-
Shuang Zhao, Jakob, Wenzel, Marschner, Steve, and Bala, Kavita
- Subjects
TEXTILES ,ALGORITHM research ,COMPUTER graphics research ,COMPUTED tomography ,COMPUTER science research ,THREE-dimensional textiles - Abstract
Cloth is essential to our everyday lives; consequently, visualizing and rendering cloth has been an important area of research in graphics for decades. One important aspect contributing to the rich appearance of cloth is its complex 3D structure. Volumetric algorithms that model this 3D structure can correctly simulate the interaction of light with cloth to produce highly realistic images of cloth. But creating volumetric models of cloth is difficult: writing specialized procedures for each type of material is onerous, and requires significant programmer effort and intuition. Further, the resulting models look unrealistically “perfect” because they lack visually important features like naturally occurring irregularities. This paper proposes a new approach to acquiring volume models, based on density data from X-ray computed tomography (CT) scans and appearance data from photographs under uncontrolled illumination. To model a material, a CT scan is made, yielding a scalar density volume. This 3D data has micron resolution details about the structure of cloth but lacks all optical information. So we combine this density data with a reference photograph of the cloth sample to infer its optical properties. We show that this approach can easily produce volume appearance models with extreme detail, and at larger scales the distinctive textures and highlights of a range of very different fabrics such as satin and velvet emerge automatically—all based simply on having accurate mesoscale geometry. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
45. Robust transceiver design for MIMO relay systems with direct link under imperfect channel state information.
- Author
-
Chen, Xiaomin, Fang, Zhu, Zhu, Qiuming, and Hu, Xujun
- Subjects
RADIO transmitter-receivers ,TERMINALS (Transportation) ,ALGORITHM research ,DESIGN ,ERRORS - Abstract
In this paper, the robust transceiver design for multiple-input-multiple-output relay systems with a direct link is investigated in the presence of imperfect channel state information. Subject to maximum power constraints at the transmitting and relay terminals, a joint optimisation algorithm is proposed based on the minimum mean-squared error criterion. Specifically, given the solution of the linear processing matrix of the receiver, the constrained optimisation problem can be transformed into two convex sub-problems in the remaining variables, and the precoding matrix can then be obtained using the projected gradient method and the CVX toolbox. Finally, an alternating algorithm is proposed for the joint design, and simulation results indicate that the proposed design scheme achieves better performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
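The paper's joint MMSE design is matrix-valued, but a heavily simplified scalar sketch still shows the alternating structure: fix the receiver gain and solve for the precoder in closed form, then fix the precoder and solve for the receiver, with the penalized MSE never increasing. The channel value, noise power, and power-penalty weight below are illustrative assumptions, not the paper's system model:

```python
def alternating_mmse(h, noise_var, power_penalty, iters=20):
    """Alternating minimization of a penalized scalar MSE
    J(f, w) = (w*h*f - 1)**2 + noise_var*w**2 + power_penalty*f**2,
    a toy stand-in for the matrix-valued MMSE transceiver problem.
    Each update is the exact minimizer in one variable, so J is monotone."""
    f, w = 1.0, 1.0
    history = []
    for _ in range(iters):
        J = (w * h * f - 1.0) ** 2 + noise_var * w ** 2 + power_penalty * f ** 2
        history.append(J)
        f = w * h / ((w * h) ** 2 + power_penalty)   # argmin over f, w fixed
        w = h * f / ((h * f) ** 2 + noise_var)       # argmin over w, f fixed
    return f, w, history
```

This monotone-descent property is what makes alternating designs like the paper's converge to a stationary point of the joint problem.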
46. Multiphysical approach including equivalent circuit models for the sizing by optimization.
- Author
-
Baraston, Arnaud, Gerbaud, Laurent, Reinbold, Vincent, Boussey, Thomas, and Wurtz, Frédéric
- Subjects
EQUIVALENT electric circuits ,MATHEMATICAL models ,MATHEMATICAL optimization ,ALGORITHM research ,PERMANENT magnet motors ,SYNCHRONOUS electric motors - Abstract
Purpose – Multiphysical models are often useful for the design of electrical devices such as electrical machines. In particular, the modelling of thermal, magnetic and electrical phenomena by an equivalent-circuit approach is often used in sizing problems. Coupling such models with other models is difficult to take into account, partly because it adds complexity to the process. The purpose of this paper is to propose automatic modelling of thermal and magnetic aspects from an equivalent-circuit approach, including the computation of gradients, with selectivity on the variables. It then discusses the coupling of various physical models for sizing by optimization algorithms. Sensitivity analyses are discussed and the multiphysical approach is applied to a permanent magnet synchronous machine. Design/methodology/approach – The paper describes thermal and magnetic models using equivalent circuits. Magnetic aspects are represented by reluctance networks and thermal aspects by thermal equivalent circuits. From circuit modelling and analytical equations, models are generated, coupled and translated into computational code (Java, C), including the computation of their Jacobians. To do so, model generators are used: CADES, Reluctool, Thermotool. The paper illustrates the modelling and automatic programming aspects with Thermotool. The generated code is directly usable by optimization algorithms. The formulation of the coupling with other models is then studied in the case of a multiphysical sizing by optimization of the Toyota PRIUS electrical motor. Findings – A main specificity of the approach is the ability to easily deal with the selectivity of the inputs and outputs of the generated model according to the problem specifications, thus drastically reducing the size of the Jacobian matrix and the computational complexity. Another specificity is the coupling of the models using analytical equations, possibly implicit equations.
Research limitations/implications – At present, the multiphysical modelling is considered only for static phenomena. However, this limit is not important for numerous sizing applications. Originality/value – The analytical approach with selectivity gives fast models, well adapted for optimization. The use of model generators allows robust programming of the models and their Jacobians. The automatic calculation of gradients allows the use of deterministic algorithms, such as sequential quadratic programming, well adapted to handling numerous constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
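The thermal equivalent circuits in the abstract map directly onto nodal analysis: stamp each thermal conductance into a conductance matrix and solve G·T = P for the node temperature rises. The tiny solver and network encoding below are a generic sketch of that idea; the actual tools named in the paper (CADES, Reluctool, Thermotool) generate far richer models, including their Jacobians:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def thermal_temperatures(n_nodes, branches, powers):
    """Nodal analysis of a thermal equivalent circuit.
    branches: (i, j, g) thermal conductances between nodes; j = -1 is ambient.
    powers: heat injected at each node. Returns temperature rise per node."""
    G = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, g in branches:
        G[i][i] += g
        if j >= 0:
            G[j][j] += g
            G[i][j] -= g
            G[j][i] -= g
    return solve_linear(G, list(powers))
```

For example, a single node tied to ambient through conductance 2 W/K and heated with 10 W sits 5 K above ambient.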
47. Mono and bi-level optimization architectures for powertrain design.
- Author
-
Caillard, Pierre, Gillon, Frederic, Randi, Sid-Ali, and Janiaud, Noelle
- Subjects
AUTOMOBILE power trains ,ELECTRIC automobiles research ,ALGORITHM research ,MATHEMATICAL optimization ,CONTROL theory (Engineering) - Abstract
Purpose – The purpose of this paper is to compare two design optimization architectures for the optimal design of a complex device, integrating simultaneously the sizing of system components and the control strategy, to increase energetic performance. The considered benchmark is a battery electric passenger car. Design/methodology/approach – The optimal design of an electric vehicle powertrain is addressed within this paper with regard to performance and range. The objectives and constraints require simulating several vehicle operating points, each of which has one degree of freedom for the electric machine control. This control is usually determined separately for each point with a sampling or an optimization loop, resulting in an architecture called bi-level. In some conditions, the control variables can be transferred to the design optimization loop by suppressing the inner loop, yielding a mono-level formulation. The paper describes in which conditions this transformation can be done and compares the results for both architectures. Findings – Results show a calculation time divided by more than 30 for the mono-level architecture compared to the natural bi-level architecture on the study case. Even with the same models and optimization algorithms, the structure of the problem should be studied to improve the results, especially if computational cost is high. Originality/value – The compared architectures provide new guidelines in the field of optimal design for electric powertrains. The way a design optimization with inner degrees of freedom is formulated can have a significant impact on computing time and on problem understanding. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
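The architectural difference can be sketched on a toy problem with one design variable d and one control variable c and objective f(d, c) = (d - 2)² + (c - d)². The bi-level version re-solves the inner control problem at every outer design step, while the mono-level version moves both variables in a single gradient loop; counting gradient evaluations shows why flattening the formulation can be much cheaper. Everything below (objective, step sizes, iteration counts) is an illustrative assumption, not the paper's powertrain model:

```python
def grad(d, c, counter):
    """Gradient of f(d, c) = (d - 2)**2 + (c - d)**2; counts evaluations."""
    counter[0] += 1
    return 2.0 * (d - 2.0) - 2.0 * (c - d), 2.0 * (c - d)

def mono_level(steps=200, lr=0.2):
    """Mono-level: one loop updates design d and control c jointly."""
    d, c, evals = 0.0, 0.0, [0]
    for _ in range(steps):
        gd, gc = grad(d, c, evals)
        d, c = d - lr * gd, c - lr * gc
    return d, c, evals[0]

def bi_level(outer=100, inner=50, lr=0.2):
    """Bi-level: each outer design step fully re-optimizes the control c."""
    d, c, evals = 0.0, 0.0, [0]
    for _ in range(outer):
        for _ in range(inner):            # inner control-optimization loop
            _, gc = grad(d, c, evals)
            c -= lr * gc
        gd, _ = grad(d, c, evals)         # outer design step
        d -= lr * gd
    return d, c, evals[0]
```

Both architectures converge to the same optimum (d = c = 2), but the nested loop spends far more model evaluations, mirroring the factor-of-30 speedup the paper reports for the mono-level architecture.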
48. A Scalable Successive-Cancellation Decoder for Polar Codes.
- Author
-
Raymond, Alexandre J. and Gross, Warren. J.
- Subjects
CODING theory ,ALGORITHM research ,MATHEMATICAL optimization ,STATIC random access memory ,SIGNAL processing - Abstract
Polar codes are the first error-correcting codes to provably achieve channel capacity, asymptotically in code length, with an explicit construction. However, under successive-cancellation decoding, polar codes require very long code lengths to compete with existing modern codes. Nonetheless, the successive cancellation algorithm enables very-low-complexity implementations in hardware, due to the regular structure exhibited by polar codes. In this paper, we present an improved architecture for successive-cancellation decoding of polar codes, making use of a novel semi-parallel, encoder-based partial-sum computation module. We also provide quantization results for realistic code length N=2^15, and explore various optimization techniques such as a chained processing element and a variable quantization scheme. This design is shown to scale to code lengths of up to N=2^21, enabled by its low logic use, low register use and simple datapaths, limited almost exclusively by the amount of available SRAM. It also supports an overlapped loading of frames, allowing full-throughput decoding with a single set of input buffers. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
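A minimal sketch of the structure the decoder exploits: the polar (Arikan) transform over GF(2), whose recursive butterfly is what successive-cancellation decoders and encoder-based partial-sum modules traverse. This is the standard recursive form (which yields a bit-reversal-permuted variant of the transform), not the paper's hardware architecture:

```python
def polar_transform(u):
    """Polar (Arikan) transform over GF(2), computed recursively:
    XOR adjacent pairs for the upper half, keep odd positions for the lower
    half, then recurse. Since the base kernel squares to the identity over
    GF(2), applying the transform twice returns the original vector."""
    n = len(u)
    if n == 1:
        return u[:]
    upper = [u[i] ^ u[i + 1] for i in range(0, n, 2)]
    lower = [u[i + 1] for i in range(0, n, 2)]
    return polar_transform(upper) + polar_transform(lower)
```

Encoding places information and frozen bits in the input vector u; a successive-cancellation decoder then estimates the bits of u one at a time, reusing this butterfly structure to update its partial sums.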
49. Yield-Aware Pareto Front Extraction for Discrete Hierarchical Optimization of Analog Circuits.
- Author
-
Jung, Seobin, Lee, Jiho, and Kim, Jaeha
- Subjects
ANALOG circuits ,PARETO optimum ,ALGORITHM research ,ANALOG integrated circuits ,ELECTRIC circuits - Abstract
This paper presents an efficient method for extracting a yield-aware Pareto front between two competing metrics of an analog circuit block, with the purpose of performing hierarchical, system-level optimization using the component-level Pareto fronts as meta-models. The proposed method consists of three steps: finding a set of Pareto-optimal design points by tracing them on a discrete grid, estimating the yield distribution of each optimal design point using a control-variate technique, and constructing a yield-aware Pareto front by interpolation. The proposed algorithm is demonstrated on a problem of finding the optimal power allocation among the components composing a clock recovery path to minimize the final clock jitter. The algorithm can estimate the Pareto front of each circuit block within a 2% error, expressing the minimum achievable jitter with 99% yield for different power budgets, while requiring only 600 to 1100 Monte-Carlo simulation samples in total. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
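Independent of the control-variate yield machinery, the first step (extracting the non-dominated set over two metrics to be minimized, e.g. power and jitter) has a simple sweep form. This generic sketch is not the paper's grid-tracing algorithm:

```python
def pareto_front(points):
    """Non-dominated set for two metrics to be minimized.
    A point dominates another if it is no worse in both metrics and strictly
    better in at least one. Sorting by the first metric lets a single sweep
    keep exactly the points whose second metric beats every point kept so far."""
    front = []
    best_second = float("inf")
    for p in sorted(points):
        if p[1] < best_second:
            front.append(p)
            best_second = p[1]
    return front
```

With (power, jitter) samples, the returned front is the trade-off curve a system-level optimizer can use as a meta-model of the block.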
50. Vehicle Behavior Analysis Using Target Motion Trajectories.
- Author
-
Song, Huan-Sheng, Lu, Sheng-Nan, Ma, Xiang, Yang, Yuan, Liu, Xue-Qin, and Zhang, Peng
- Subjects
TRAFFIC monitoring ,IMAGE segmentation ,ALGORITHM research ,TRAFFIC engineering ,WEATHER - Abstract
In this paper, a real-time vehicle behavior analysis system is presented, which can be used in traffic jams and under complex weather conditions. In recent years, many works based on background estimation and foreground extraction for traffic event detection have been reported. In these studies, the vehicle images must be accurately segmented, yet uneven illumination, shadows, and vehicle overlap are difficult to handle. The main contribution of this paper is a point-tracking system for vehicle behavior analysis that avoids a difficult image segmentation procedure. In the proposed system, feature points are extracted using an improved Moravec algorithm. A specially designed template is used to track the feature points through the image sequences. Trajectories of the feature points are thus obtained, and unqualified trajectories are removed using decision rules. Finally, vehicle behavior analysis algorithms are applied to the trajectories for traffic event detection. The proposed system has been used widely by Chinese highway management departments. The application results show that the newly developed system and its algorithms are robust enough for vehicle behavior analysis under complex weather conditions. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
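The paper's improvement to the Moravec detector is not specified in the abstract, but the classic Moravec corner measure (the minimum self-dissimilarity of a window over a small set of shifts) can be sketched as follows; the window size and shift set are common textbook choices, not necessarily the paper's:

```python
def moravec_response(image, y, x,
                     shifts=((1, 0), (0, 1), (1, 1), (1, -1)), half=1):
    """Moravec corner measure at (y, x): the minimum, over shift directions,
    of the sum of squared differences between a window and its shifted copy.
    Flat regions score ~0, edges score low along the edge direction, and
    corners score high in every direction."""
    best = None
    for dy, dx in shifts:
        ssd = 0.0
        for wy in range(y - half, y + half + 1):
            for wx in range(x - half, x + half + 1):
                d = image[wy + dy][wx + dx] - image[wy][wx]
                ssd += d * d
        best = ssd if best is None else min(best, ssd)
    return best
```

Thresholding this response over the image yields the candidate feature points that the tracking template then follows between frames.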