112 results
Search Results
2. A rough set-based measurement model study on high-speed railway safety operation.
- Author
-
Hu, Qizhou, Tan, Minjia, Lu, Huapu, and Zhu, Yun
- Subjects
- *
RAILROADS , *RAILROAD safety measures , *SET theory , *DATA analysis , *RELIABILITY in engineering - Abstract
Aiming to solve the safety problems of high-speed railway operation and management, a new method urgently needs to be constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation that bears on the measurement indexes of its safe operation. After analyzing in detail the factors that influence high-speed railway operation safety, a rough measurement model is constructed to describe the operation process. Based on the above considerations, this paper regroups the safety influence factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment and environment. The paper also provides another reasonable and effective theoretical method for solving the multiple-attribute measurement problems of high-speed railway operation safety. In analyzing the operation data of 10 pivotal railway lines in China, this paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate the operation safety value. The calculation results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, which verifies its feasibility and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
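The core rough-set operation behind such measurement models can be sketched in a few lines. This is an illustrative toy, not the authors' model: the partition, the decision class, and the function names are invented for the example.

```python
# Hypothetical sketch: lower/upper approximation of a decision class under an
# indiscernibility relation, the basic construct of rough set theory.

def approximations(universe, indiscernible, target):
    """indiscernible maps each object to its equivalence class (a frozenset);
    target is the set of objects in the decision class."""
    lower, upper = set(), set()
    for x in universe:
        cls = indiscernible[x]
        if cls <= target:       # whole class inside target -> certain member
            lower.add(x)
        if cls & target:        # class overlaps target -> possible member
            upper.add(x)
    return lower, upper

# Toy partition: objects 1 and 2 share identical index values.
eq = {1: frozenset({1, 2}), 2: frozenset({1, 2}),
      3: frozenset({3}), 4: frozenset({4})}
low, up = approximations([1, 2, 3, 4], eq, {1, 3})
# low == {3}, up == {1, 2, 3}; approximation accuracy = |low| / |up| = 1/3
```

The gap between the lower and upper approximations is what lets such models quantify measurement uncertainty without assuming probability distributions.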
3. A novel multi-item joint replenishment problem considering multiple type discounts.
- Author
-
Cui, Ligang, Zhang, Yajun, Deng, Jie, and Xu, Maozeng
- Subjects
- *
PRODUCTION scheduling , *DISCOUNT prices , *ECONOMIC decision making , *HEURISTIC algorithms , *SOCIAL problems - Abstract
In business replenishment, discount offers for multiple items may either provide different discount schedules with a single discount type or provide schedules with multiple discount types. This paper investigates the joint effects of multiple discount schemes on multi-item joint replenishment decisions. A joint replenishment problem (JRP) model that considers three discount offers simultaneously (all-unit discount, incremental discount and total volume discount) is constructed to determine the basic cycle time and the joint replenishment frequencies of the items. To solve the proposed problem, a heuristic algorithm is proposed to find the optimal solutions and the corresponding total cost of the JRP model. Numerical experiments are performed to test the algorithm, and the computational results for JRPs under different discount combinations show differing significance in replenishment cost reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
4. Use of evidential reasoning and AHP to assess regional industrial safety.
- Author
-
Chen, Zhichao, Chen, Tao, Qu, Zhuohua, Yang, Zaili, Ji, Xuewei, Zhou, Yi, and Zhang, Hui
- Subjects
- *
INDUSTRIAL safety , *URBANIZATION , *RISK assessment , *ANALYTIC hierarchy process , *FEASIBILITY studies - Abstract
China’s fast economic growth contributes to the rapid development of its urbanization process, but also gives rise to a series of industrial accidents, which often cause loss of life and damage to property and the environment, thus requiring the associated risk analysis and safety control measures to be implemented in advance. However, the incompleteness of historical failure data before the occurrence of accidents makes it difficult to use traditional risk analysis approaches such as probabilistic risk analysis in many cases. This paper aims to develop a new methodology capable of assessing regional industrial safety (RIS) in an uncertain environment. A hierarchical structure for modelling the risks influencing RIS is first constructed. A hybrid of evidential reasoning (ER) and the Analytical Hierarchy Process (AHP) is then used to assess the risks in a complementary way, in which AHP is used to evaluate the weight of each risk factor and ER is employed to synthesise the safety evaluations of the investigated region(s) against the risk factors from the bottom to the top level of the hierarchy. The successful application of the hybrid approach in a real case analysis of RIS in several major districts of Beijing (the capital of China) demonstrates its feasibility and provides risk analysts and safety engineers with useful insights on effective solutions for comprehensive risk assessment of RIS in metropolitan cities. The contribution of this paper lies in its comparison of RIS risk levels across regions against various risk factors, so that best practices from the good performer(s) can be used to improve the safety of the others. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
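As a rough illustration of the AHP half of the hybrid, criterion weights can be derived from a pairwise comparison matrix. The geometric-mean approximation below is a common shortcut (the principal-eigenvector method is the textbook choice), and the 3x3 matrix is invented for the example.

```python
# Hedged sketch: AHP weights via the row geometric-mean approximation.
import numpy as np

# Hypothetical pairwise comparisons for three risk factors (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # geometric mean of each row
w = gm / gm.sum()                          # normalised criterion weights
# w sums to 1; the first factor dominates (w[0] > w[1] > w[2])
```

In the paper's scheme, such weights would then scale the ER belief structures when synthesising evaluations up the hierarchy.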
5. Potential travel cost saving in urban public-transport networks using smartphone guidance.
- Author
-
Song, Cuiying, Guan, Wei, and Ma, Jihui
- Subjects
- *
PUBLIC transit , *TRAVEL costs , *DECISION making , *MULTIAGENT systems , *TRAVEL time (Traffic engineering) - Abstract
Public transport (PT) is a key element in most major cities around the world. With the development of smartphones, readily available journey planning information is becoming an integral part of the PT system. Each traveler has specific preferences when undertaking a trip, and these preferences can also be reflected on the smartphone. This paper considers transit assignment in urban public-transport networks in which passengers receive smartphone-based information containing elements that might influence travel decisions in relation to line loads as well as passenger benefits, and it discusses the transition from the current widespread route-choice approach to a personalized decision-making approach based on smartphone information. The smartphone-guidance approach, which considers each passenger's preferences on travel time, waiting time and transfers, obtains his or her preferred route from the potential travel routes generated by a Depth First Search (DFS) method. Two other approaches, based on scenarios reflecting current reality, are used for comparison: passengers with no access to real-time information, and passengers with access only to arrival times at the platform. For illustration, the network proposed by Spiess and Florian is used in the experiments within an agent-based model. Two experiments are conducted according to whether each passenger's choice method is kept consistent. As expected, the first experiment showed that travel for consistent passengers with smartphone guidance was clearly shorter: travel time was reduced by more than 15% and weighted cost by more than 20%, with an average saving of approximately 3.88 minutes per passenger. The second experiment showed that travel cost, as well as cost savings, gradually decreased under smartphone guidance, with the maximum cost savings accounting for 14.2% of the total weighted cost. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
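The route-generation step the abstract mentions can be sketched with a plain depth-first enumeration of simple paths; the four-node network is hypothetical and this is not the authors' code.

```python
# Illustrative sketch: enumerate candidate routes with depth-first search.

def dfs_routes(graph, origin, dest, path=None):
    path = (path or []) + [origin]
    if origin == dest:
        return [path]
    routes = []
    for nxt in graph.get(origin, []):
        if nxt not in path:          # keep paths simple: no revisits
            routes += dfs_routes(graph, nxt, dest, path)
    return routes

net = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
routes = dfs_routes(net, 'A', 'D')
# routes == [['A', 'B', 'D'], ['A', 'C', 'D']]
```

Each enumerated route would then be scored against the passenger's stated preferences on travel time, waiting time and transfers.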
6. A novel decision tree classification based on post-pruning with Bayes minimum risk.
- Author
-
Ahmed, Ahmed Mohamed, Rizaner, Ahmet, and Ulusoy, Ali Hakan
- Subjects
- *
DECISION trees , *COMPUTATIONAL complexity , *ERROR analysis in mathematics , *INFORMATION technology , *COGNITIVE psychology - Abstract
Pruning is applied to combat the over-fitting problem: the tree is pruned back with the goal of identifying the decision tree with the lowest error rate on previously unobserved instances, breaking ties in favour of smaller trees with high accuracy. In this paper, pruning with Bayes minimum risk is introduced for estimating the risk-rate. The method proceeds in a bottom-up fashion, converting the parent node of a subtree to a leaf node if the estimated risk-rate of the parent node is less than the combined risk-rates of its leaves. This paper proposes a post-pruning method that considers various evaluation standards, such as attribute selection, accuracy, tree complexity, time taken to prune the tree, precision/recall scores, TP/FN rates and area under the ROC curve. The experimental results show that the proposed method produces better classification accuracy, and its complexity is not much different from the complexities of the reduced-error pruning and minimum-error pruning approaches. The experiments also demonstrate that the proposed method shows satisfactory performance in terms of precision score, recall score, TP rate, FP rate and area under the ROC curve. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
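The bottom-up conversion rule the abstract describes can be sketched as follows. This is a simplification under stated assumptions: node risks are taken as given, and leaf risks are combined by an unweighted mean rather than an instance-weighted one.

```python
# Minimal sketch of risk-based post-pruning: collapse a subtree to a leaf
# when the parent's estimated risk-rate is no worse than its leaves' combined risk.

class Node:
    def __init__(self, risk, children=()):
        self.risk = risk                 # estimated risk-rate at this node
        self.children = list(children)

def prune(node):
    if not node.children:
        return node.risk
    leaves_risk = sum(prune(c) for c in node.children) / len(node.children)
    if node.risk <= leaves_risk:         # parent no riskier: prune subtree
        node.children = []
        return node.risk
    return leaves_risk

tree = Node(0.30, [Node(0.25), Node(0.45)])  # mean leaf risk 0.35 > 0.30
prune(tree)
# tree.children == []  ->  the subtree has been collapsed to a leaf
```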
7. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera.
- Author
-
Yaghoobi Ershadi, Nastaran
- Subjects
- *
TRAFFIC monitoring , *TRAFFIC engineering , *TRAFFIC accidents , *ESTIMATION theory , *AUTOMOBILE driving in fog , *VIBRATION (Mechanics) - Abstract
Traffic surveillance systems are of interest to many researchers seeking to improve traffic control and reduce the risk caused by accidents. In this area, many published works are concerned only with vehicle detection under normal conditions. The camera may vibrate due to wind or bridge movement, and detecting and tracking vehicles is a very difficult task in bad winter weather (snow, rain, wind, etc.), in the dusty weather of arid and semi-arid regions, at night, and so on. It is also very important to estimate vehicle speed under such complicated weather conditions. In this paper, we improve our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background subtraction-based strategy combined with extra processing to segment vehicles; here the extra processing comprised analysis of headlight size, location and area. Tracking was done between consecutive frames via a generalized particle filter to detect the vehicle, and headlights were paired using connected component analysis. Vehicle counting was then performed based on the pairing result; using the centroid of each blob, we calculated the distance travelled between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records under different conditions, such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter and Gaussian model, in different traffic conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
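The speed estimate described in the abstract (centroid displacement over inter-frame time) reduces to one formula; the frame rate and pixel-to-metre scale below are invented calibration values.

```python
# Sketch: vehicle speed from blob centroids in consecutive frames.
import math

def speed_kmh(c1, c2, fps=25.0, metres_per_pixel=0.05):
    dist_m = math.hypot(c2[0] - c1[0], c2[1] - c1[1]) * metres_per_pixel
    return dist_m * fps * 3.6        # metres/frame -> m/s -> km/h

v = speed_kmh((100, 200), (112, 216))   # centroid moved 20 px -> 90.0 km/h
```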
8. A video-based real-time adaptive vehicle-counting system for urban roads.
- Author
-
Liu, Fei, Zeng, Zhiyuan, and Jiang, Rong
- Subjects
- *
TRAFFIC flow , *CITY traffic , *ROADS , *COMPUTER vision , *TRAFFIC congestion ,DEVELOPING countries - Abstract
In developing nations, many expanding cities face challenges that result from overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
9. Optimizing a desirable fare structure for a bus-subway corridor.
- Author
-
Liu, Bing-Zheng, Ge, Ying-En, Cao, Kai, Jiang, Xi, Meng, Lingyun, Liu, Ding, and Gao, Yunfeng
- Subjects
- *
BUS fares , *TRANSPORTATION corridors , *PUBLIC welfare , *TRANSPORTATION , *PASSENGERS - Abstract
This paper aims to optimize a desirable fare structure for the public transit service along a bus-subway corridor, considering factors related to trip equity, including travel distance and comfort level. The travel distance factor is represented by the distance-based fare strategy, an existing differential strategy. The comfort-level factor is captured by the area-based fare strategy, a new differential strategy defined in this paper. Both factors are addressed by the combined fare strategy, which is composed of the distance-based and area-based fare strategies. The flat fare strategy is applied to determine a reference level of social welfare and to obtain the general passenger flow along transit lines, which is used to divide areas or zones along the corridor. The problem is formulated as a bi-level program whose upper level maximizes social welfare and whose lower level, capturing traveler choice behavior, is a variable-demand stochastic user equilibrium assignment model. A genetic algorithm is applied to solve the bi-level program, while the method of successive averages is adopted to solve the lower-level model. A series of numerical experiments illustrates the performance of the models and solution methods. Numerical results indicate that all three differential fare strategies enhance social welfare more than the flat fare strategy, and that the fare structure under the combined fare strategy generates the highest social welfare and the largest resulting passenger demand, which implies that the more equity factors a differential fare strategy involves, the more desirable the fare structure it yields. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
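The lower-level solver named above, the method of successive averages (MSA), is easy to sketch on a toy two-route corridor. The linear cost functions and demand are invented, and a real stochastic user equilibrium would use a logit-style loading rather than the all-or-nothing auxiliary flow used here.

```python
# Hedged sketch: Method of Successive Averages with step size 1/n.

def msa(cost_fns, demand, iters=100):
    n_routes = len(cost_fns)
    flow = [demand / n_routes] * n_routes
    for n in range(1, iters + 1):
        costs = [f(x) for f, x in zip(cost_fns, flow)]
        best = costs.index(min(costs))                       # cheapest route
        aux = [demand if k == best else 0.0 for k in range(n_routes)]
        flow = [x + (y - x) / n for x, y in zip(flow, aux)]  # average in
    return flow

# Two parallel routes with linear congestion costs; at equilibrium the
# cheaper, more congestible route carries about two thirds of the demand.
flows = msa([lambda x: 10 + 0.1 * x, lambda x: 15 + 0.05 * x], demand=100)
```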
10. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty.
- Author
-
Fathollah Bayati, Mohsen and Sadjadi, Seyed Jafar
- Subjects
- *
ELECTRIC network analysis , *ROBUST optimization , *ELECTRIC properties , *DATA envelopment analysis , *POWER distribution networks - Abstract
In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective is to account for perturbations in the data by developing NDEA models based on robust optimization methodology. Furthermore, the efficiency of entire electricity power networks, spanning the generation, transmission and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of individual components of electricity power networks over the past two decades, no previous study has evaluated the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is investigated. The results are compared with the traditional network DEA and parametric SFA methods. The validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models are more reliable than the traditional network DEA model. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
11. NeuroPlace: Categorizing urban places according to mental states.
- Author
-
Al-barrak, Lulwah, Kanjo, Eiman, and Younis, Eman M. G.
- Subjects
- *
PUBLIC spaces , *MENTAL health , *EMOTIONS , *ELECTROENCEPHALOGRAPHY , *REJUVENATION - Abstract
Urban spaces have a great impact on people's emotions and behaviour. A number of factors influence our brain's responses to a space. This paper presents a novel urban place recommendation approach based on modelling in-situ EEG data. The research leverages newly affordable electroencephalogram (EEG) headsets, which can sense mental states such as meditation and attention levels. These emerging devices have been utilized to understand how human brains are affected by the surrounding built environments and natural spaces. In this paper, mobile EEG headsets are used to detect mental states at different types of urban places. By analysing and modelling brain activity data, we were able to classify three different places according to the mental-state signatures of the users, and to create an association map to guide and recommend people to therapeutic places that lessen brain fatigue and increase mental rejuvenation. Our mental-state classifier achieved an accuracy of 90.8%. NeuroPlace breaks new ground not only as a mobile ubiquitous brain monitoring system for urban computing, but also as a system that can advise urban planners on the impact of specific urban planning policies and structures. We present and discuss the challenges in making our initial prototype more practical, robust and reliable as part of our ongoing research, and we present some enabling applications using the proposed architecture. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
12. A low latency and low power indirect topology for on-chip communication.
- Author
-
Gulzari, Usman Ali, Khan, Sarzamin, Sajid, Muhammad, Anjum, Sheraz, Torres, Frank Sill, Sarjoughian, Hessam, and Gani, Abdullah
- Subjects
- *
TOPOLOGY , *ENERGY consumption , *PHYSICAL sciences , *APPLIED mathematics , *COGNITIVE science - Abstract
This paper presents the Hybrid Scalable-Minimized-Butterfly-Fat-Tree (H-SMBFT) topology for on-chip communication. The main aspects of this work are a description of the architectural design and characteristics, together with a comparative analysis against two established indirect topologies, namely the Butterfly-Fat-Tree (BFT) and the Scalable-Minimized-Butterfly-Fat-Tree (SMBFT). Simulation results demonstrate that the proposed topology outperforms its predecessors in terms of performance, area and power dissipation. Specifically, it improves the link interconnectivity between routing levels, such that the number of required links is reduced. This results in reduced router complexity and shortened routing paths between any pair of communicating nodes in the network. Moreover, simulation results under synthetic as well as real-world embedded application workloads reveal that H-SMBFT can reduce average latency by up to 35.63% and 17.36% compared to BFT and SMBFT, respectively. In addition, the power dissipation of the network can be reduced by up to 33.82% and 19.45%, while energy consumption can be improved by up to 32.91% and 16.83% compared to BFT and SMBFT, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Vehicle modeling for the analysis of the response of detectors based on inductive loops.
- Author
-
Mocholí Belenguer, Ferran, Millana, Antonio Martínez, Mocholí Salcedo, Antonio, and Milián Sánchez, Victor
- Subjects
- *
VEHICLE models , *INTELLIGENT transportation systems , *VEHICLE detectors , *PLATING , *TRAFFIC speed - Abstract
Magnetic loops are among the most popular and widely used traffic sensors because of their widely deployed technology and simple mode of operation. Nevertheless, very simple models have traditionally been used to simulate the effect of the passage of vehicles over these loops: in general, vehicles have been modelled as simple rectangular metal plates located parallel to the ground plane at a certain height close to the vehicle chassis. With such a simple model, however, it is not possible to carry out a rigorous study of the performance of different vehicle models with the aim of obtaining basic parameters such as the vehicle type, its speed or its direction of travel. For this reason, and because computer simulation and analysis have emerged as a priority in intelligent transportation systems (ITS), this paper presents a more complex vehicle model that characterizes vehicles as multiple metal plates of different sizes and heights, which provides better results in virtual simulation environments. This type of modeling will be useful for reproducing the actual behavior of road-installed systems based on inductive loops and will also facilitate vehicle classification and the extraction of basic traffic parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
14. Multi-criteria group decision making based on Archimedean power partitioned Muirhead mean operators of q-rung orthopair fuzzy numbers.
- Author
-
Qin, Yuchu, Qi, Qunfen, Scott, Paul J., and Jiang, Xiangqian
- Subjects
- *
GROUP decision making , *AGGREGATION operators , *FUZZY numbers , *FUZZY sets , *EXTREME value theory , *REAL numbers - Abstract
Two critical tasks in multi-criteria group decision making (MCGDM) are to describe criterion values and to aggregate the described information to generate a ranking of alternatives. A flexible and capable tool for the first task is the q-rung orthopair fuzzy number (qROFN), and an effective tool for the second task is the aggregation operator. So far, nearly thirty different aggregation operators of qROFNs have been presented. Each operator has its distinctive characteristics and can work well for a specific purpose. However, no operator yet provides the desired generality and flexibility in aggregating criterion values, dealing with heterogeneous interrelationships among criteria, and reducing the influence of extreme criterion values. To provide such an operator, the Muirhead mean operator, the power average operator, the partitioned average operator, and Archimedean T-norm and T-conorm operations are concurrently introduced into q-rung orthopair fuzzy sets; an Archimedean power partitioned Muirhead mean operator of qROFNs and its weighted form are presented, and a MCGDM method based on the weighted operator is proposed in this paper. The generalised expressions of the two operators are first defined, their properties are explored and proved, and their specific expressions are constructed. On the basis of the specific expressions, a method for solving MCGDM problems based on qROFNs is then designed. Finally, the feasibility and effectiveness of the method are demonstrated via a numerical example, a set of experiments, and qualitative and quantitative comparisons. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. Analytical network process based optimum cluster head selection in wireless sensor network.
- Author
-
Farman, Haleem, Javed, Huma, Jan, Bilal, Ahmad, Jamil, Ali, Shaukat, Khalil, Falak Naz, and Khan, Murad
- Subjects
- *
WIRELESS sensor networks , *WEATHER forecasting , *MULTIPLE criteria decision making , *SENSITIVITY analysis , *SIMULATION methods & models - Abstract
Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable health-monitoring sensors and a plethora of other areas. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Constructing the topology through that technique, we here use the analytical network process (ANP) model for cluster head (CH) selection in a WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), the number of times a node has been selected as cluster head (TCH) and merged node (MN). CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to examine the applicability of the ANP model for cluster head selection in a WSN. In addition, sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking under different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, thereby extending the overall network lifetime. The analysis also shows that the ANP method allows CH selection with a better understanding of the dependencies among the different components involved in the evaluation process. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
16. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
- Author
-
Bi, Xia-an and Zhao, Junxia
- Subjects
- *
COMPUTER networks , *CLASSIFICATION algorithms , *EXPECTATION-maximization algorithms , *BACKTRACK programming , *SIMULATION methods & models - Abstract
With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing algorithms, research on packet classification based on the hierarchical trie has become an important branch because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, the paper formalizes the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, it builds a hierarchical trie based on the results of the expectation-maximization clustering. Finally, simulation and real-environment experiments compare the performance of the algorithm with other typical algorithms and analyze the results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
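The clustering step named in the abstract can be illustrated with a tiny expectation-maximization loop. The paper works in a two-dimensional rule space, whereas this toy is one-dimensional with unit-variance components, equal priors and invented data.

```python
# Illustrative sketch: 1-D EM for a two-component Gaussian mixture
# (fixed unit variances, equal priors), the mechanism HTEMC uses to
# group rules by their aggregate characteristics.
import math

def em_1d(xs, mu, iters=50):
    mu = list(mu)
    for _ in range(iters):
        resp = []                                   # E-step: responsibilities
        for x in xs:
            w = [math.exp(-(x - m) ** 2) for m in mu]
            s = sum(w)
            resp.append([v / s for v in w])
        for k in range(len(mu)):                    # M-step: re-estimate means
            num = sum(r[k] * x for r, x in zip(resp, xs))
            den = sum(r[k] for r in resp)
            mu[k] = num / den
    return sorted(mu)

centres = em_1d([0.1, 0.2, 0.15, 5.0, 5.2, 4.9], [0.0, 1.0])
# centres converge near [0.15, 5.03]: two well-separated rule clusters
```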
17. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
- Author
-
Zeng, Ping, Tan, Qingping, Meng, Xiankai, Shao, Zeming, Xie, Qinzheng, Yan, Ying, Cao, Wei, and Xu, Jianjun
- Subjects
- *
UNIFORM Resource Locators , *COMPUTER algorithms , *CLOUD computing , *HTTP (Computer network protocol) , *HASHING - Abstract
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied in network security, data analysis, load balancing, cloud robotic communications, and other fields that require string matching from a fixed starting position. Our approach effectively addresses the performance problems of the classical multi-pattern matching algorithms. The paper explores ways to improve string-matching performance under the HTTP protocol by combining a hash method with a binary method that transforms the symbol-space matching problem into a digital-space numerical comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and shows great promise for real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
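The general hash-then-binary-search idea can be sketched as follows. This is not the authors' MH implementation, just the shape of the technique: bucket the patterns by length in a hash table, then binary-search the sorted bucket for the URL prefix of that length.

```python
# Hedged sketch: multi-pattern prefix matching from a fixed starting position.
import bisect
from collections import defaultdict

def build(patterns):
    table = defaultdict(list)          # hash step: bucket patterns by length
    for p in patterns:
        table[len(p)].append(p)
    for bucket in table.values():
        bucket.sort()                  # enable binary search within a bucket
    return table

def match(table, url):
    hits = []
    for length, bucket in table.items():
        prefix = url[:length]
        i = bisect.bisect_left(bucket, prefix)   # binary step
        if i < len(bucket) and bucket[i] == prefix:
            hits.append(prefix)
    return hits

t = build(["/api", "/api/v1", "/img"])
match(t, "/api/v1/users")    # -> ['/api', '/api/v1']
```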
18. Home detection of freezing of gait using support vector machines through a single waist-worn triaxial accelerometer.
- Author
-
Rodríguez-Martín, Daniel, Samà, Albert, Pérez-López, Carlos, Català, Andreu, Moreno Arostegui, Joan M., Cabestany, Joan, Bayés, Àngels, Alcaine, Sheila, Mestre, Berta, Prats, Anna, Crespo, M. Cruz, Counihan, Timothy J., Browne, Patrick, Quinlan, Leo R., ÓLaighin, Gearóid, Sweeney, Dean, Lewy, Hadas, Azuri, Joseph, Vainstein, Gabriel, and Annicchiarico, Roberta
- Subjects
- *
SUPPORT vector machines , *PARKINSON'S disease , *ACCELEROMETERS , *GAIT in humans , *WEARABLE technology - Abstract
Among Parkinson’s disease (PD) symptoms, freezing of gait (FoG) is one of the most debilitating. To assess FoG, current clinical practice mostly employs repeated evaluations over weeks and months based on questionnaires, which may not accurately map the severity of this symptom. The use of a non-invasive system to monitor the activities of daily living (ADL) and the PD symptoms experienced by patients throughout the day could provide a more accurate and objective evaluation of FoG, in order to better understand the evolution of the disease and allow for more informed adjustments to the patient’s treatment plan. This paper presents a new algorithm to detect FoG with a machine learning approach based on Support Vector Machines (SVM) and a single tri-axial accelerometer worn at the waist. The method is evaluated on acceleration signals gathered in an outpatient setting from 21 PD patients at home, under two different conditions: first, a generic model is tested using a leave-one-out approach and, second, a personalised model also uses part of the dataset from each patient. Results show a significant improvement in the accuracy of the personalised model compared to the generic model, with an enhancement of 7.2% in the geometric mean (GM) of specificity and sensitivity. Furthermore, the SVM approach adopted has been compared to the most comprehensive FoG detection method currently in use (referred to as MBFA in this paper). Our novel generic method provides an enhancement of 11.2% in the GM compared to the MBFA generic model and, in the case of the personalised model, a 10% improvement with respect to the MBFA personalised model. Thus, our results show that a machine learning approach can be used to monitor FoG during the daily life of PD patients and, furthermore, that personalised models for FoG detection can improve monitoring accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
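The front half of such a pipeline (windowing the waist-worn accelerometer signal and extracting features for the SVM) can be sketched as below; the window length, hop size and feature set are invented, not the paper's configuration.

```python
# Illustrative sketch: windowed feature extraction for an SVM-based FoG detector.
import numpy as np

def windows(signal, size, step):
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(w):
    w = np.asarray(w, dtype=float)
    spectrum = np.abs(np.fft.rfft(w))
    return [w.mean(), w.std(), spectrum[1:].max()]   # time- and frequency-domain

sig = np.sin(np.linspace(0, 20 * np.pi, 256))   # stand-in for one accel axis
X = np.array([features(w) for w in windows(sig, 64, 32)])
# X: one 3-feature row per window, ready for e.g. an RBF-kernel SVM
```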
19. Modeling Day-to-day Flow Dynamics on Degradable Transport Network.
- Author
-
Gao, Bo, Zhang, Ronghui, and Lou, Xiaoming
- Subjects
- *
DYNAMICAL systems , *UNIQUENESS (Mathematics) , *STOCHASTIC convergence , *DECISION making , *SOFTWARE engineering , *SOURCE code - Abstract
Stochastic link capacity degradations are common phenomena in transport networks; they can cause travel time variations and in turn affect travelers’ daily route choice behaviors. This paper formulates a deterministic dynamic model to capture the day-to-day (DTD) flow evolution process in the presence of link capacity degradations. The aggregated network flow dynamics are driven by travelers’ learning of uncertain travel times and their choices among risky routes. The paper applies an exponential-smoothing filter to describe travelers’ learning of travel time variations, and formulates a risk-attitude parameter updating equation to reflect travelers’ endogenous risk-attitude evolution. In addition, the paper conducts theoretical analyses of several significant mathematical characteristics of the proposed DTD model, including fixed-point existence, uniqueness, stability and irreversibility. Numerical experiments demonstrate the effectiveness of the DTD model and verify some important dynamic system properties. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
20. A New Cross-By-Pass-Torus Architecture Based on CBP-Mesh and Torus Interconnection for On-Chip Communication.
- Author
-
Gulzari, Usman Ali, Sajid, Muhammad, Anjum, Sheraz, Agha, Shahrukh, and Torres, Frank Sill
- Subjects
- *
ENERGY consumption , *INTEGRATED circuit interconnections , *TELECOMMUNICATION systems , *ARCHITECTURAL design , *TOPOLOGY - Abstract
A mesh topology is one of the most promising architectures for on-chip communication due to its regular and simple structure. However, the performance of a mesh degrades greatly as network size increases, owing to its small bisection width and large network diameter. To overcome this limitation, many researchers have presented modified mesh designs that add extra links to improve performance in terms of network latency and power consumption. The Cross-By-Pass-Mesh was previously presented by us as an improved version of the mesh topology through the intelligent addition of extra links. This paper presents an efficient topology named Cross-By-Pass-Torus that further increases the performance of the Cross-By-Pass-Mesh. The proposed design merges the best features of the Cross-By-Pass-Mesh and the torus to reduce the network diameter, minimize the average number of hops between nodes, increase the bisection width and enhance the overall performance of the network. The architectural design of the topology is presented and analyzed against similar 2D topologies in terms of average latency, throughput and power consumption. To verify the actual behavior of the proposed topology, synthetic traffic traces and five different real embedded application workloads are applied to the proposed as well as the competing network topologies. The simulation results indicate that the Cross-By-Pass-Torus is an efficient candidate among its predecessor and competitor topologies due to its lower average latency and increased throughput, at a slight cost in network power and energy for on-chip communication. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
21. Control Strategies for the DAB Based PV Interface System.
- Author
-
El-Helw, Hadi M., Al-Hasheem, Mohamed, and Marei, Mostafa I.
- Subjects
- *
CASCADE converters , *PHOTOVOLTAIC power systems , *ENERGY harvesting , *PHASE shifters , *MAXIMUM power point trackers - Abstract
This paper presents an interface system based on the Dual Active Bridge (DAB) converter for photovoltaic (PV) arrays. Two control strategies are proposed for the DAB converter to harvest the maximum power from the PV array. The first strategy is based on a simple PI controller that regulates the PV terminal voltage through the phase shift angle of the DAB converter. The Perturb and Observe (P&O) Maximum Power Point Tracking (MPPT) technique is utilized to set the reference for the PV terminal voltage. The second strategy employs an Artificial Neural Network (ANN) to directly set the phase shift angle of the DAB converter that results in harvesting maximum power. This feed-forward strategy overcomes the stability issues of the feedback strategy. The proposed PV interface systems are modeled and simulated using the MATLAB/SIMULINK and EMTDC/PSCAD software packages. The simulation results reveal the accurate and fast response of the proposed systems. The dynamic performance of the proposed feed-forward strategy outperforms that of the feedback strategy in terms of accuracy and response time. Moreover, an experimental prototype is built to test and validate the proposed PV interface system. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
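The P&O MPPT technique mentioned in the first strategy is a standard hill-climbing loop: perturb the operating voltage, observe the change in power, and keep or reverse the perturbation direction accordingly. A simplified sketch (the `measure` callback, initial voltage and step size are illustrative, not values from the paper):

```python
def perturb_and_observe(measure, v0=30.0, dv=0.5, steps=50):
    """Classic Perturb and Observe MPPT sketch: keep stepping the
    operating voltage in the direction that increased measured power,
    and reverse direction when power falls. `measure(v)` returns the
    PV output power at voltage v."""
    v, direction = v0, 1.0
    p_prev = measure(v)
    for _ in range(steps):
        v += direction * dv
        p = measure(v)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

In steady state the operating point oscillates around the maximum power point with amplitude set by `dv`; in the paper's setup the resulting voltage reference would be translated into the DAB phase shift angle, which this sketch does not model.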
22. Methodology and model-based DSS to managing the reallocation of inventory to orders in LHP situations. Application to the ceramics sector.
- Author
-
Oltra-Badenes, Raul, Gil-Gomez, Hermenegildo, Merigo, Jose M., and Palacios-Marques, Daniel
- Subjects
- *
DECISION support systems , *INVENTORIES , *CERAMICS , *MANUFACTURING processes , *LABOR economics , *OTOACOUSTIC emissions , *PHYSICAL sciences - Abstract
Lack of homogeneity in the product (LHP) is a problem when customers require homogeneous units of a single product. In such cases, the optimal allocation of inventory to orders becomes much more complex. Furthermore, in an MTS environment, an optimal initial allocation may become less than ideal over time due to changing circumstances. This problem occurs in the ceramics sector, where the final product varies in tone and calibre. This paper proposes a methodology for the reallocation of inventory to orders in LHP situations (MERIO-LHP) and a model-based decision-support system (DSS) to support the methodology, which enables an optimal reallocation of inventory to order lines in real business environments in which LHP is inherent. The proposed methodology and model-based DSS were validated by applying them to a real case at a ceramics company. The analysis of the results indicates that considerable improvements can be obtained with regard to the quantity of orders fulfilled and sales turnover. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Investigation of household private car ownership considering interdependent consumer preference.
- Author
-
Wu, Na and Tang, Chunyan
- Subjects
- *
AUTOMOBILE ownership , *MARKOV chain Monte Carlo , *CONSUMER preferences , *HOUSEHOLDS , *AUTOREGRESSIVE models , *NEWTON-Raphson method - Abstract
People are connected by various social networks, resulting in interdependence of consumer choices. It is therefore important and realistic to assume choice interdependence when modeling private car ownership. In this paper, we investigate the interdependence of private car ownership choices using a spatial autoregressive binary probit model estimated by the Bayesian Markov chain Monte Carlo (MCMC) method. The autoregressive matrix is constructed demographically, reflecting the dependence of one household’s private car ownership choice on the choices of other households. Compared with a pure binary probit model estimated by the MCMC method, the spatial autoregressive model achieves a significant improvement in both the log-likelihood value and the log marginal density value, which are calculated using the importance sampling method of Newton and Raftery, from approximately -202 to approximately -63 and from -208 to -145, respectively. Moreover, the results of the spatial autoregressive probit model suggest that the number of children, ownership of an apartment and the availability of a parking lot are positively and significantly associated with the private car ownership level. To test the out-of-sample performance of the model, we estimate it using 600 data points and test it on another 148 data points. The results indicate that the predictive power is greatly improved. Finally, we analyze the augmented parameter and discover that it is associated with the parking variable in addition to the license variable. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Cognition difference between players of different involvement toward the concrete design features in music games.
- Author
-
Chen, Yi-Chen and Li, Shyue-Ran
- Subjects
- *
PATTERN perception , *MUSIC software , *MOBILE games , *COGNITION , *GAMES , *MULTIPLE regression analysis , *SENSORY perception - Abstract
When designing mobile games, understanding the preferences and cognition of players is a topic worth exploring. The main objectives of this paper are to obtain the design features of music games on mobile devices and to explore players’ perceptions of music games. The results can serve as an orientation for decision-making in game design. Based on Miryoku Engineering and the Evaluation Grid Method, this study interviewed 22 frequent users to obtain concrete game design features. In addition, 210 subjects were divided into high-, medium- and low-involvement groups according to CIP measures, and multiple regression analysis was used to determine whether players with different levels of involvement perceived the design features of music games differently. The study identified 44 concrete features and six original evaluation items of game design, and found perception differences between the involvement groups; only two concrete design features significantly influenced all three groups: ‘Extra games to earn more points after completing levels’ and ‘Playable without internet’. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. Ex-ante online risk assessment for building emergency evacuation through multimedia data.
- Author
-
Zhang, Haoran, Song, Xuan, Song, Xiaoya, Huang, Dou, Xu, Ning, Shibasaki, Ryosuke, and Liang, Yongtu
- Subjects
- *
BUILDING evacuation , *RISK assessment , *MULTIMEDIA systems , *SOCIAL forces , *DEEP learning , *TELEVISION in security systems - Abstract
Ex-ante online risk assessment for building emergency evacuation is essential to protect human life and property. Current risk assessment methods are limited by a tradeoff between accuracy and efficiency. In this paper, we propose an online method that overcomes this tradeoff based on multimedia data (e.g. video data from surveillance cameras) and deep learning. The method consists of two parts. The first estimates evacuee positions as input for training the assessment model, which then performs risk assessment in real scenarios. The second uses an evacuation simulation based on a social force model to generate the output of the training model. We verify the proposed method in both simulated and real scenarios. Model sensitivity analyses and large-scale tests demonstrate the usability and superiority of the proposed method, which decreases the computation time of risk assessment from 10 minutes (for a traditional simulation method) to 2.18 s. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Energy efficient partition allocation in mixed-criticality systems.
- Author
-
Guasque, Ana, Balbastre, Patricia, Crespo, Alfons, and Peiró, Salva
- Subjects
- *
PHYSICAL sciences , *LIFE sciences , *CONSTRAINT programming , *CENTRAL processing units , *COGNITIVE science - Abstract
This paper addresses the problem of energy management of mixed-criticality applications in a multi-core partitioned architecture. Instead of focusing on new scheduling algorithms that adjust frequency to save energy, we propose a partition-to-CPU allocation that takes into account not only the different frequencies at which the CPU can operate but also the criticality level of the partitions. The goal is to provide a set of pre-calculated allocations, called profiles, so that at run time the system can switch between modes depending on the battery level. These profiles achieve different levels of energy saving and performance by applying different strategies. We also present a comparison, in terms of energy saving, of the most widely used bin-packing algorithms for partition allocation. As this is a heuristic approach, it cannot guarantee that our results achieve the minimum energy consumption; for this reason, we also provide a comparison with an exact method, namely constraint programming. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
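Among the bin-packing heuristics the abstract compares, first-fit decreasing is a typical representative: sort partitions by CPU utilization and place each into the first core with room. A generic sketch (a unit core capacity is assumed; the paper's actual allocation strategies and energy model are not reproduced here):

```python
def first_fit_decreasing(utils, capacity=1.0):
    """First-fit-decreasing bin packing, a common heuristic for
    allocating partitions (represented by their CPU utilizations)
    to cores. Returns a list of bins, each a list of utilizations."""
    bins = []
    for u in sorted(utils, reverse=True):   # largest partitions first
        for b in bins:
            if sum(b) + u <= capacity:      # fits on an existing core
                b.append(u)
                break
        else:                               # no core has room: open a new one
            bins.append([u])
    return bins
```

Being a heuristic, this minimizes neither core count nor energy in general, which is why the paper also compares against an exact constraint-programming formulation.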
27. Effort-aware and just-in-time defect prediction with neural network.
- Author
-
Qiao, Lei and Wang, Yan
- Subjects
- *
JUST-in-time systems , *ARTIFICIAL neural networks , *SOFTWARE engineering , *PREDICTION models , *DEEP learning - Abstract
Effort-aware just-in-time (JIT) defect prediction ranks source code changes based on the likelihood of defects as well as the effort needed to inspect such changes. Accurate defect prediction algorithms help to find more defects with limited effort. To improve the accuracy of defect prediction, in this paper we propose a deep learning based approach for effort-aware just-in-time defect prediction. The key idea is that neural networks and deep learning can be exploited to select useful features for defect prediction, because they have proved excellent at selecting useful features for classification and regression. First, we preprocess ten numerical metrics of code changes and feed them to a neural network whose output indicates how likely it is that the code change under test contains bugs. Second, we compute the benefit-cost ratio for each code change by dividing the likelihood by its size. Finally, we rank code changes according to their benefit-cost ratio. Evaluation results on a well-known data set suggest that the proposed approach outperforms the state-of-the-art approaches on each of the subject projects, improving the average recall and popt by 15.6% and 8.1%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
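The ranking step described in the abstract, dividing each change's predicted defect likelihood by its size and sorting, can be sketched directly. The neural network that produces the likelihoods is omitted, and the input tuples below are illustrative:

```python
def rank_changes(changes):
    """Effort-aware ranking sketch: score each code change by its
    'benefit-cost ratio' (predicted defect likelihood divided by
    change size, a proxy for inspection effort) and sort descending.
    `changes` is a list of (name, likelihood, size) tuples."""
    scored = [(name, likelihood / size) for name, likelihood, size in changes]
    return [name for name, _ in sorted(scored, key=lambda t: -t[1])]
```

Under this ordering a small change with moderate defect likelihood can outrank a large change with high likelihood, which is exactly the effort-aware behavior the paper targets.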
28. A novel financial risk assessment model for companies based on heterogeneous information and aggregated historical data.
- Author
-
Li, Dan-Ping, Cheng, Si-Jie, Cheng, Peng-Fei, Wang, Jian-Qiang, and Zhang, Hong-Yu
- Subjects
- *
FINANCIAL risk management , *INFORMATION retrieval , *DATA analysis , *FUZZY sets , *MULTIPLE criteria decision making - Abstract
Financial risk affects not only the development of the company itself but also the economic development of the whole society; financial risk assessment of companies is therefore an important task. Numerous methods of financial risk assessment have been researched. However, most existing methods neither integrate fuzzy sets with quantitative analysis nor take into account historical data from past years. To address these shortcomings, this paper proposes a novel financial risk assessment model for companies based on heterogeneous multiple-criteria decision-making (MCDM) and historical data. Subjective and objective indexes are comprehensively considered in the model’s financial risk assessment index system, which combines fuzzy theory with quantitative data analysis. The assessment information is obtained from the company’s historical financial information, credit rating agencies and decision makers, and includes crisp numbers, triangular fuzzy numbers and neutrosophic numbers. Furthermore, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is used to determine the ranking order of companies according to their financial risk. Finally, an empirical study of financial risk assessment for companies is conducted, and the results of comparative analysis and sensitivity analysis suggest that the proposed model can effectively and reliably identify the company with the lowest financial risk. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
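The TOPSIS step used to rank companies can be illustrated generically: vector-normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A plain-Python sketch for crisp numbers only (the paper's triangular fuzzy and neutrosophic extensions are not covered):

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch. Rows of `matrix` are alternatives,
    columns are criteria; `benefit[j]` is True when larger values of
    criterion j are better. Returns closeness-to-ideal scores in [0, 1]."""
    ncols = len(weights)
    # vector-normalise each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))]
         for row in matrix]
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

For the financial risk application, criteria where lower is better (risk indicators) would have `benefit[j] = False`, and the company with the highest closeness score is the least risky.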
29. Resilience as an emergent property of human-infrastructure dynamics: A multi-agent simulation model for characterizing regime shifts and tipping point behaviors in infrastructure systems.
- Author
-
Rasoulkhani, Kambiz and Mostafavi, Ali
- Subjects
- *
PSYCHOLOGICAL resilience , *INFRASTRUCTURE (Economics) , *DEMOGRAPHIC change , *SIMULATION methods & models , *MULTIAGENT systems , *DATA analysis - Abstract
The objective of this study is to establish a framework for analyzing infrastructure dynamics affecting the long-term steady state, and hence resilience in civil infrastructure systems. To this end, a multi-agent simulation model was created to capture important phenomena affecting the dynamics of coupled human-infrastructure systems and model the long-term performance regimes of infrastructure. The proposed framework captures the following three factors that shape the dynamics of coupled human-infrastructure systems: (i) engineered physical infrastructure; (ii) human actors; and (iii) chronic and acute stressors. A complex system approach was adopted to examine the long-term resilience of infrastructure based on the understanding of performance regimes, as well as tipping points at which shifts in the performance regime of infrastructure occur under the impact of external stressors and/or change in internal dynamics. The application of the proposed framework is demonstrated in a case of urban water distribution infrastructure using the data from a numerical case study network. The developed multi-agent simulation model was then used in examining the system resilience over a 100-year horizon under stressors such as population change and funding constraints. The results identified the effects of internal dynamics and external stressors on the resilience landscape of infrastructure systems. Furthermore, the results also showed the capability of the framework in capturing and simulating the underlying mechanisms affecting human-infrastructure dynamics, as well as long-term regime shifts and tipping point behaviors. Therefore, the integrated framework proposed in this paper enables building complex system-based theories for a more advanced understanding of civil infrastructure resilience. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Exploring efficient grouping algorithms in regular expression matching.
- Author
-
Xu, Chengcheng, Su, Jinshu, and Chen, Shuhui
- Subjects
- *
DEEP packet inspection (Computer security) , *AUTOMATICITY (Learning process) , *DATA structures , *HEURISTIC , *SEMANTICS - Abstract
Background: Regular expression matching (REM) is widely employed as the major tool for deep packet inspection (DPI) applications. For automatic processing, the regular expression patterns need to be converted to a deterministic finite automaton (DFA). However, with the ever-increasing scale and complexity of pattern sets, the state explosion problem has posed a great challenge to DFA-based regular expression matching. Rule grouping is a direct method to solve the state explosion problem: the original rule set is divided into multiple disjoint groups, and each group is compiled to a separate DFA, thereby significantly restraining the severe state explosion that occurs when compiling all the rules into a single DFA. Objective: For practical implementation, the total number of DFA states should be as few as possible, so that the DFA data structures can be deployed on fast on-chip memories for rapid access. In addition, to support fast pattern updates in some applications, the time cost of grouping should be as small as possible. In this study, we aimed to propose an efficient grouping method that generates as few states as possible with as little time overhead as possible. Methods: When compiling multiple patterns into a single DFA, the number of DFA states is usually greater than the total number of states when compiling each pattern to a separate DFA. This is mainly caused by semantic overlaps among different rules. By quantifying the interaction values for each pair of rules, the rule grouping problem can be reduced to the maximum k-cut graph partitioning problem. We then propose a heuristic algorithm called the one-step greedy (OSG) algorithm to solve this NP-hard problem. Moreover, a subroutine named the heuristic initialization (HI) algorithm is devised to further optimize the grouping algorithms. Results: We employed three practical rule sets for the experimental evaluation. Results show that the OSG algorithm outperforms the state-of-the-art grouping solutions in both the total number of DFA states and the time cost of grouping. The HI subroutine also demonstrates a significant optimization effect on the grouping algorithms. Conclusions: The DFA state explosion problem has become the most challenging issue in regular expression matching applications. Rule grouping is a practical direction that divides the original rule set into multiple disjoint groups. In this paper, we investigate the current grouping solutions and propose a compact and efficient grouping algorithm. Experiments conducted on practical rule sets demonstrate the superiority of our proposal. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
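The abstract reduces rule grouping to maximum k-cut over a graph of pairwise interaction values. The published OSG algorithm's details are not given here, so the following is a generic one-pass greedy sketch of that reduction: each rule (vertex) joins the group with the least total interaction weight to its current members, which maximizes the weight cut between groups one decision at a time:

```python
def greedy_k_cut(weights, k):
    """Greedy max k-cut sketch for rule grouping. `weights[i][j]` is
    the pairwise interaction value between rules i and j (a proxy for
    the extra DFA states caused by compiling them together). Each rule
    is placed in the group where it interacts least with the members."""
    n = len(weights)
    groups = [[] for _ in range(k)]
    for v in range(n):
        # within-group interaction cost of adding v to each group
        cost = [sum(weights[v][u] for u in g) for g in groups]
        groups[cost.index(min(cost))].append(v)
    return groups
```

In the paper's setting, each resulting group would then be compiled into its own DFA, keeping heavily interacting rules apart and so restraining state explosion.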
31. Power aware routing algorithms (PARA) in wireless mesh networks for emergency management.
- Author
-
Al-Hadhrami, Tawfik, Saeed, Faisal, and Olajide, Funminiyi
- Subjects
- *
POWER aware computing , *ROUTING (Computer network management) , *WIRELESS communications , *MESH networks , *EMERGENCY management - Abstract
Wireless Mesh Networks (WMNs) integrate the advantages of WLANs and mobile ad hoc networks and have become a key technique for next-generation wireless networks in the context of emergency recovery. WMNs are multi-hop wireless networks with instant deployment, self-healing, self-organization and self-configuration features. These capabilities make WMNs a promising technology for incident and emergency communication. An incident area network (IAN) needs a reliable and live routing path during disaster recovery and emergency response operations, when infrastructure-based communications and power resources have been destroyed and no routes are available. Power-aware routing plays a significant role in WMNs in providing continuous, efficient emergency services, but the existing power-aware routing algorithms used in wireless networks do not fully fit the characteristics of WMNs for emergency recovery. This paper proposes a power aware routing algorithm (PARA) for WMNs, which selects optimal paths for sending packets mainly based on the power level of the next node along the path. The algorithm was implemented and tested in a proven simulator. The analytic results show that the proposed power-aware routing metric can clearly improve network performance by reducing network overheads and maintaining a high delivery ratio with low latency. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
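Selecting paths "mainly based on the power level of the next node" resembles a widest-path (max-min) computation: prefer the route whose weakest node has the most remaining power. A Dijkstra-style sketch of that idea, not the paper's exact PARA metric (the graph, power levels and tie-breaking here are illustrative):

```python
import heapq

def max_min_power_path(adj, power, src, dst):
    """Widest-path sketch for power-aware routing: among all routes
    from src to dst, find one maximising the minimum residual power of
    its nodes. `adj` maps node -> list of neighbours; `power` maps
    node -> power level. Returns (path, bottleneck_power)."""
    best = {src: power[src]}            # best bottleneck width known per node
    prev = {}
    heap = [(-power[src], src)]         # max-heap via negated widths
    while heap:
        neg_w, u = heapq.heappop(heap)
        if u == dst:
            break
        if -neg_w < best.get(u, float("-inf")):
            continue                    # stale heap entry
        for v in adj[u]:
            width = min(-neg_w, power[v])   # bottleneck power along route
            if width > best.get(v, float("-inf")):
                best[v] = width
                prev[v] = u
                heapq.heappush(heap, (-width, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], best[dst]
```

Routing this way avoids draining nearly depleted relay nodes, which is the property an emergency-recovery WMN needs to keep paths alive.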
32. CFSH: Factorizing sequential and historical purchase data for basket recommendation.
- Author
-
Wang, Pengfei, Chen, Jiansheng, and Niu, Shaozhang
- Subjects
- *
PURCHASING , *FINANCIAL leverage , *COLLABORATIVE commerce , *PREDICTION theory , *TASK performance - Abstract
Predicting which products customers will buy in their next transaction is an important task. Existing work in next-basket prediction can be summarized into two paradigms. One is the item-centric paradigm, where sequential patterns are mined from customers’ transactional data and leveraged for prediction; however, these approaches usually suffer from the data sparseness problem. The other is the user-centric paradigm, where collaborative filtering techniques are applied to customers’ historical data; however, these methods ignore the sequential behaviors of customers, which are often crucial for next-basket prediction. In this paper, we introduce a hybrid method, the Co-Factorization model over Sequential and Historical purchase data (CFSH for short), for next-basket recommendation. Compared with existing methods, our approach has the following merits: 1) by mining global sequential patterns, we avoid the sparseness problem of traditional item-centric methods; 2) by factorizing product-product and customer-product matrices simultaneously, we fully exploit both sequential and historical behaviors to learn better customer and product representations; 3) by using a hybrid recommendation method, we achieve better performance in next-basket prediction. Experimental results on three real-world purchase datasets demonstrate the effectiveness of our approach compared with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. An improved memory-based collaborative filtering method based on the TOPSIS technique.
- Author
-
Al-bashiri, Hael, Abdulgabber, Mansoor Abdullateef, Romli, Awanis, and Kahtan, Hasan
- Subjects
- *
NEUROSCIENCES , *COMPUTER science , *RECOMMENDER systems , *TOPSIS method , *LIFE sciences - Abstract
This paper describes an approach for improving the accuracy of memory-based collaborative filtering, based on the technique for order of preference by similarity to ideal solution (TOPSIS) method. Recommender systems are used to filter the huge amount of data available online based on user-defined preferences. Collaborative filtering (CF) is a commonly used recommendation approach that generates recommendations based on correlations among user preferences. Although several enhancements have increased the accuracy of memory-based CF through the development of improved similarity measures for finding successful neighbors, there has been less investigation into prediction score methods, in which rating/preference scores are assigned to items that have not yet been selected by a user. A TOPSIS solution for evaluating multiple alternatives based on more than one criterion is proposed as an alternative to prediction score methods for evaluating and ranking items based on the results from similar users. The recommendation accuracy of the proposed TOPSIS technique is evaluated by applying it to various common CF baseline methods, which are then used to analyze the MovieLens 100K and 1M benchmark datasets. The results show that CF based on the TOPSIS method is more accurate than baseline CF methods across a number of common evaluation metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
34. The stability of memristive multidirectional associative memory neural networks with time-varying delays in the leakage terms via sampled-data control.
- Author
-
Wang, Weiping, Yu, Xin, Luo, Xiong, Wang, Long, Li, Lixiang, Kurths, Jürgen, Zhao, Wenbing, and Xiao, Jiuhong
- Subjects
- *
ASSOCIATIVE memory (Psychology) , *BIOLOGICAL neural networks , *TIME-varying systems , *LYAPUNOV functions , *NEUROSCIENCES - Abstract
In this paper, we propose a new model of memristive multidirectional associative memory neural networks that includes time-varying delays in the leakage terms via sampled-data control. We use the input delay method to transform the sampled-data system into a continuous time-delay system. We then analyze the exponential stability and asymptotic stability of the equilibrium points of this model. By constructing a suitable Lyapunov function and using the Lyapunov stability theorem together with some inequality techniques, sufficient criteria for ensuring the stability of the equilibrium points are obtained. Finally, numerical examples are given to demonstrate the effectiveness of our results. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
35. Approximate sparse spectral clustering based on local information maintenance for hyperspectral image classification.
- Author
-
Yan, Qing, Ding, Yun, Zhang, Jing-Jing, Xun, Li-Na, and Zheng, Chun-Hou
- Subjects
- *
HYPERSPECTRAL imaging systems , *COMPUTATIONAL complexity , *CLUSTER analysis (Statistics) , *APPROXIMATION theory , *APPLIED mathematics - Abstract
Sparse spectral clustering (SSC) has become one of the most popular clustering approaches in recent years. However, its high computational complexity prevents its application to large-scale datasets such as hyperspectral images (HSIs). In this paper, we propose two efficient approximate sparse spectral clustering methods for HSI clustering, in which clustering performance is improved by utilizing local information among the data. First, we construct a smaller representative dataset on which sparse spectral clustering is performed. The labels of ground objects are then extended to the whole dataset based on local information, according to two extension strategies. The first uses local interpolation to improve the extension of the clustering result. The second casts label extension as a subspace embedding problem, solved by locally linear embedding (LLE). Several experiments on HSIs demonstrate that the proposed algorithms are effective for HSI clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. An efficient control flow validation method using redundant computing capacity of dual-processor architecture.
- Author
-
Wang, Qingran, Guo, Wei, and Wei, Jizeng
- Subjects
- *
MICROPROCESSORS , *COMPUTER hackers , *COMPUTER circuits , *COMPUTER storage capacity , *ALGORITHMS - Abstract
Microprocessors in safety-critical systems are extremely vulnerable to hacker attacks and circuit crosstalk, which can modify binaries and lead programs to run along wrong control flow paths. Designing a run-time validation method with few hardware modifications is a significant challenge. In this paper, an efficient control flow validation method named DCM (Dual-Processor Control Flow Validation Method) is proposed, based on a dual-processor architecture. Since a burst of memory-access-intensive instructions can block the pipeline and cause many waiting clock cycles, the DCM assigns the idle pipeline cycles of the blocked processor to the other processor to validate control flow at run time. An extra lightweight monitor unit in each processor is needed, and a special dual-processor communication protocol is designed to schedule the redundant computing capacity between the two processors to better perform validation tasks. To further improve efficiency, we also design a software-based self-validation algorithm that helps reduce the number of validations. The combination of the hardware and software methods speeds up the validation procedure and protects the control flow paths with different emphases. The cycle-accurate simulator GEM5 is used to simulate two ARMv7-A processors with out-of-order pipelines. Experiments show the performance overhead of DCM is less than 22% on average across the SPEC 2006 benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. Market Competitiveness Evaluation of Mechanical Equipment with a Pairwise Comparisons Hierarchical Model.
- Author
-
Hou, Fujun
- Subjects
- *
ECONOMIC competition , *DECISION making , *COMPARATIVE studies , *INDUSTRIAL organization (Economic theory) , *EIGENVALUES - Abstract
This paper describes how market competitiveness evaluations of mechanical equipment can be made in multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and that the alternatives are pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study, based on an analysis of its desirable traits. An illustration and comparison show that PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper when applying PCbHDM. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
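The PCbHDM procedure itself is not detailed in the abstract, but ratio-scale pairwise comparison models generally derive priority weights from the principal eigenvector of the comparison matrix. A generic power-iteration sketch of that derivation (not the specific PCbHDM method):

```python
def priority_vector(M, iters=100):
    """Derive priority weights from a ratio-scale pairwise comparison
    matrix M (M[i][j] estimates weight_i / weight_j) as its principal
    eigenvector, computed by simple power iteration and normalised to
    sum to 1. Generic sketch, not the paper's PCbHDM procedure."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]          # renormalise each iteration
    return w
```

For a perfectly consistent matrix the iteration recovers the underlying weights exactly; for real judgments it yields the standard eigenvector priorities used to rank the alternatives.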
38. The Augmented Lagrange Multipliers Method for Matrix Completion from Corrupted Samplings with Application to Mixed Gaussian-Impulse Noise Removal.
- Author
-
Meng, Fan, Yang, Xiaomei, and Zhou, Chenghu
- Subjects
- *
GAUSSIAN distribution , *IMAGE analysis , *MACHINE learning , *IMAGE processing , *BIOINFORMATICS , *COMPUTER vision - Abstract
This paper studies the problem of restoring images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains, such as machine learning, image processing, computer vision and bioinformatics. It mainly involves matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which we define as the problem of matrix completion from corrupted samplings and model as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm. Meanwhile, we put forward a novel and effective algorithm based on augmented Lagrange multipliers to solve the problem exactly. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is dominant when images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform traditional methods, not only in the simultaneous removal of Gaussian and impulse noise and in the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
39. Electric multiple unit circulation plan optimization based on the branch-and-price algorithm under different maintenance management schemes.
- Author
-
Li, Wenjun, Nie, Lei, and Zhang, Tianwei
- Subjects
- *
RAILROAD maintenance & repair , *ELECTRIC multiple units , *MATHEMATICAL optimization , *ALGORITHMS , *DECISION making - Abstract
For railway operators, an important goal is to improve the utilization efficiency of electric multiple units (EMUs). When operators design EMU circulation plans, EMU type restrictions are critical factors in assigning EMUs to the correct depots for maintenance. However, existing studies assume that EMUs are maintained only at their home depots. Targeting that problem, this paper proposes an optimization model for the EMU circulation planning problem that allows depots to be selected for EMU maintenance. The model optimizes the number of EMUs used and the number of EMU maintenance tasks, and simultaneously incorporates other important constraints, including type restrictions on EMU maintenance and night accommodation capacity at depots. A branch-and-price algorithm is developed to solve the model. A case study of a real-world high-speed railway was conducted to compare and analyze the effects of different maintenance location constraints. The results show that under the maintenance sharing scheme, the number of EMUs used decreases, the number of EMU maintenance tasks can be reduced, and the time occupied by EMU maintenance is released. In addition, sharing maintenance resources and increasing mileage limits can significantly decrease the number of EMU maintenance tasks. The model and algorithm can serve as an effective quantitative analysis tool for railway operators' decision-making in the EMU circulation planning problem. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. Value of sample information in dynamic, structurally uncertain resource systems.
- Author
-
Williams, Byron K. and Johnson, Fred A.
- Subjects
- *
INFORMATION resources management , *INFORMATION retrieval , *CONSERVATION of natural resources , *INFORMATION resources , *T cells - Abstract
Few if any natural resource systems are completely understood and fully observed. Instead, there almost always is uncertainty about the way a system works and its status at any given time, which can limit effective management. A natural response to uncertainty is to allocate time and effort to the collection of additional data, on the reasonable assumption that more information will facilitate better understanding and lead to better management. But the collection of more data, either through observation or investigation, requires time and effort that could often be put to other conservation activities instead. An important question is whether the use of limited resources to improve understanding is justified by the resulting potential for improved management. In this paper we directly address the change in value that results from new information collected through investigation. We frame the value of information in terms of learning through the management process itself, as well as learning through investigations that are external to the management process but add to our base of understanding. We provide a conceptual framework and metrics for this issue, and illustrate them with examples involving Florida scrub-jays (Aphelocoma coerulescens). [ABSTRACT FROM AUTHOR]
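The trade-off the authors frame can be illustrated with the textbook expected-value-of-perfect-information (EVPI) calculation over a discrete set of competing system models. The payoff table in the example below is hypothetical, not data from the scrub-jay case study:

```python
def value_of_information(model_probs, payoff):
    """Expected value of perfect information for a one-shot decision.
    model_probs[m]: current belief weight on system model m.
    payoff[m][a]:   management return of action a if model m is true."""
    n_actions = len(payoff[0])
    # Best action under current uncertainty (maximize expected payoff)
    expected = [sum(p * payoff[m][a] for m, p in enumerate(model_probs))
                for a in range(n_actions)]
    best_under_uncertainty = max(expected)
    # Expected payoff if the true model were revealed before acting
    best_with_perfect_info = sum(p * max(payoff[m])
                                 for m, p in enumerate(model_probs))
    return best_with_perfect_info - best_under_uncertainty
```

If resolving the uncertainty costs more than this value, the effort is better spent on other conservation activities; for two equally likely models with payoffs `[[10, 0], [0, 10]]` the EVPI is 5.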
- Published
- 2018
- Full Text
- View/download PDF
41. Self-supervised sparse coding scheme for image classification based on low rank representation.
- Author
-
Chen, Deyun, Sun, Guanglu, Lin, Kezheng, Li, Ao, and Wu, Zhiqiang
- Subjects
- *
COMPRESSED sensing , *SPARSE matrices , *IMAGE analysis , *CODING theory , *ALGORITHMS , *EXPERIMENTS - Abstract
Recently, sparse representation, which relies on the underlying assumption that samples can be sparsely represented by their labeled neighbors, has been applied with great success to image classification problems. Through sparse representation-based classification (SRC), the label can be assigned with minimum residual between the sample and its synthetic version with class-specific coding, which means that the coding scheme is the most significant factor for classification accuracy. However, conventional SRC-based coding schemes ignore dependency among the samples, which leads to an undesired result that similar samples may be coded into different categories due to quantization sensitivity. To address this problem, in this paper, a novel approach based on self-supervised sparse representation is proposed for image classification. In the proposed approach, the manifold structure of samples is firstly exploited with low rank representation. Next, the low-rank representation matrix is used to characterize the similarity of samples in order to establish a self-supervised sparse coding model, which aims to preserve the local structure of codings for similar samples. Finally, a numerical algorithm utilizing the alternating direction method of multipliers (ADMM) is developed to obtain the approximate solution. Experiments on several publicly available datasets validate the effectiveness and efficiency of our proposed approach compared with existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
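The per-sample coding step in such schemes is an ℓ1-regularized least-squares problem, and ADMM (as the abstract names) is the usual solver. The sketch below covers only the plain lasso subproblem; the low-rank similarity (self-supervision) regularizer is omitted for brevity, and the dictionary `D` and parameter values are placeholders:

```python
import numpy as np

def admm_lasso(D, y, lam=0.1, rho=1.0, iters=200):
    """ADMM for the coding subproblem min_x 0.5||D x - y||^2 + lam ||x||_1,
    using the standard splitting x = z."""
    n = D.shape[1]
    DtD, Dty = D.T @ D, D.T @ y
    A_inv = np.linalg.inv(DtD + rho * np.eye(n))   # cache the x-step solve
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = A_inv @ (Dty + rho * (z - u))          # quadratic x-update
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # l1 prox
        u = u + x - z                              # scaled dual update
    return z
```

In the paper's setting an extra quadratic term coupling the codes of similar samples (weighted by the low-rank representation matrix) would be added to the x-update.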
- Published
- 2018
- Full Text
- View/download PDF
42. Biometric recognition via texture features of eye movement trajectories in a visual searching task.
- Author
-
Li, Chunyong, Xue, Jiguo, Quan, Cheng, Yue, Jingwei, and Zhang, Chenggang
- Subjects
- *
EYE movements , *BIOMETRIC identification , *FEATURE extraction , *TEXTURE analysis (Image processing) , *TASK performance - Abstract
Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye-movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we propose a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement gained by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and in robustness to variations in temporal and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. A shared synapse architecture for efficient FPGA implementation of autoencoders.
- Author
-
Suzuki, Akihiro, Morie, Takashi, and Tamukoh, Hakaru
- Subjects
- *
FIELD programmable analog arrays , *SYNAPSES , *ELECTRIC circuits , *ELECTRIC networks , *ADAPTIVE computing systems - Abstract
This paper proposes a shared synapse architecture for autoencoders (AEs) and implements an AE with the proposed architecture as a digital circuit on a field-programmable gate array (FPGA). In the proposed architecture, the values of the synapse weights are shared between the synapses of the input and hidden layers, and between the synapses of the hidden and output layers. This architecture uses fewer of the limited resources of an FPGA than an architecture that does not share the synapse weights, halving the number of synapse modules used. So that the proposed circuit can be implemented as various types of AEs, it exposes three kinds of parameters: one to change the number of units in each layer, one to change the bit width of internal values, and a learning rate. By altering the network configuration through these parameters, the proposed architecture can also be used to construct a stacked AE. The proposed circuits are logically synthesized, and their resource usage is determined. Our experimental results show that single and stacked AE circuits using the proposed shared synapse architecture operate as regular AEs and regular stacked AEs. The scalability of the proposed circuit and the relationship between bit widths and learning results are also examined. The clock cycles of the proposed circuits are formulated, and this formula is used to estimate the theoretical performance of the circuit when it is used to construct arbitrary networks. [ABSTRACT FROM AUTHOR]
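In software terms, the shared-synapse idea corresponds to a tied-weights autoencoder: the decoder reuses the encoder's weight matrix transposed, so a single weight store serves both layers. A minimal floating-point NumPy sketch follows (the paper's fixed-point FPGA arithmetic and configurable bit widths are not modeled):

```python
import numpy as np

class TiedAutoencoder:
    """Autoencoder whose decoder reuses the encoder weights transposed,
    mirroring the shared-synapse idea: one weight matrix serves both the
    input-to-hidden and hidden-to-output synapses."""

    def __init__(self, n_in, n_hid, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_hid, n_in))  # the single shared matrix
        self.b = np.zeros(n_hid)   # hidden bias
        self.c = np.zeros(n_in)    # output bias
        self.lr = lr

    def forward(self, x):
        h = np.tanh(self.W @ x + self.b)   # encoder
        y = self.W.T @ h + self.c          # decoder shares W (transposed)
        return h, y

    def train_step(self, x):
        h, y = self.forward(x)
        e = y - x                           # reconstruction error
        dh = (self.W @ e) * (1.0 - h ** 2)  # backprop through tanh
        # W accumulates gradient contributions from both layers
        self.W -= self.lr * (np.outer(dh, x) + np.outer(h, e))
        self.b -= self.lr * dh
        self.c -= self.lr * e
        return float(np.mean(e ** 2))
```

Because the two layers share `self.W`, its gradient is the sum of the encoder and decoder contributions, which is exactly what the shared hardware module computes.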
- Published
- 2018
- Full Text
- View/download PDF
44. Congestion patterns of electric vehicles with limited battery capacity.
- Author
-
Jing, Wentao, Ramezani, Mohsen, An, Kun, and Kim, Inhi
- Subjects
- *
ELECTRIC vehicle batteries , *ELECTRIC vehicle charging stations , *ENERGY consumption , *TRAFFIC congestion , *TRAFFIC assignment - Abstract
The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging times. This paper investigates the congestion/flow pattern captured by the stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where BEV paths are restricted by battery capacity. BEV energy consumption is assumed to be a linear function of path length and path travel time, which captures both the path distance limit and the road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment, where the path cost is the sum of the corresponding link costs and a path-specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and solution algorithm. [ABSTRACT FROM AUTHOR]
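The flavor of the model can be conveyed with a toy path-based logit SUE loaded by the method of successive averages, where paths whose length exceeds the range limit receive an out-of-energy penalty. This substitutes plain MSA for the paper's Lagrangian-dual gradient-projection algorithm, and all network data and parameter values are made up for illustration:

```python
import math

def sue_assignment(paths, lengths, free_time, cap, demand,
                   dist_limit, penalty=50.0, theta=0.5, iters=500):
    """Logit-based stochastic user equilibrium by successive averages.
    paths: list of link-index lists; lengths: corresponding path lengths.
    Paths longer than dist_limit incur an out-of-energy penalty cost."""
    n_links = len(free_time)
    flow = [demand / len(paths)] * len(paths)
    for k in range(1, iters + 1):
        # Aggregate path flows onto links
        link_flow = [0.0] * n_links
        for f, p in zip(flow, paths):
            for l in p:
                link_flow[l] += f
        # BPR link travel times; path cost = link times + range penalty
        t = [free_time[l] * (1.0 + 0.15 * (link_flow[l] / cap[l]) ** 4)
             for l in range(n_links)]
        cost = [sum(t[l] for l in p)
                + (penalty if lengths[i] > dist_limit else 0.0)
                for i, p in enumerate(paths)]
        # Logit path choice gives the auxiliary flow; MSA step toward it
        expc = [math.exp(-theta * c) for c in cost]
        total = sum(expc)
        aux = [demand * e / total for e in expc]
        flow = [f + (a - f) / k for f, a in zip(flow, aux)]
    return flow, cost
```

On a two-path example where one path exceeds the distance limit, nearly all demand shifts to the feasible path, which is the congestion-pattern effect the paper studies.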
- Published
- 2018
- Full Text
- View/download PDF
45. The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups.
- Author
-
Amir, Ofra, Amir, Dor, Shahar, Yuval, Hart, Yuval, and Gal, Kobi
- Subjects
- *
SOCIAL psychology , *SOCIOLOGY , *GROUP decision making , *COGNITIVE science , *COMPUTATIONAL complexity - Abstract
Demonstrability—the extent to which group members can recognize a correct solution to a problem—has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors—the difficulty of solving a problem and the difficulty of verifying the correctness of a solution—on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making. [ABSTRACT FROM AUTHOR]
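The qualitative effect can be reproduced with a simple binomial model of a nominal group (an illustrative model, not the authors' analytical derivation): under high demonstrability one recognized solver suffices ("truth wins"), while under low demonstrability a correct majority is needed, and for individual solve probability p < 0.5 the latter deteriorates as the group grows:

```python
from math import comb

def group_success(n, p, scheme="truth_wins"):
    """Probability that an n-member nominal group settles on the correct
    solution when each member solves it independently with probability p.
    'truth_wins' models high demonstrability: one solver suffices.
    'majority' models low demonstrability: a correct majority is required."""
    if scheme == "truth_wins":
        return 1.0 - (1.0 - p) ** n
    # Majority rule: strictly more than half the members must be correct
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

With p = 0.3, growing the group from 3 to 9 members helps under "truth wins" but hurts under majority rule, matching the non-monotonicity described above.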
- Published
- 2018
- Full Text
- View/download PDF
46. Towards the use of similarity distances to music genre classification: A comparative study.
- Author
-
Goienetxea, Izaro, Martínez-Otzeta, José María, Sierra, Basilio, and Mendialdua, Iñigo
- Subjects
- *
POPULAR music genres , *ETHNOLOGY , *CLUSTER analysis (Statistics) , *COMPARATIVE studies , *ALGORITHMS - Abstract
Music genre classification is a challenging research area in which open questions remain regarding the classification approach, the representation of music pieces, distances between and within genres, and so on. In this paper an investigation on the classification of generated music pieces is performed, based on the following idea: if closely related known pieces are grouped into different sets –or clusters– and a new song is then automatically generated that is somehow "inspired" by each set, the new song should be more likely to be classified as belonging to the set that inspired it, using the same distance used to separate the clusters. Different representations of music pieces and distances among pieces are used; the obtained results are promising and indicate the appropriateness of the approach even in such a subjective area as music genre classification. [ABSTRACT FROM AUTHOR]
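The classification rule the study relies on, assigning a generated piece to the cluster whose members are closest on average under the chosen distance, can be sketched generically; the piece representation and distance function are deliberately left abstract, since the paper compares several of each:

```python
def nearest_cluster(piece, clusters, distance):
    """Assign a (generated) piece to the cluster whose members it is
    closest to on average under the supplied distance function.
    clusters: dict mapping cluster name -> list of member pieces."""
    def avg_dist(members):
        return sum(distance(piece, m) for m in members) / len(members)
    return min(clusters, key=lambda name: avg_dist(clusters[name]))
```

The study's hypothesis is then that a piece generated from cluster X is returned to X by this rule more often than chance.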
- Published
- 2018
- Full Text
- View/download PDF
47. Face recognition algorithm using extended vector quantization histogram features.
- Author
-
Yan, Yan, Lee, Feifei, Wu, Xueqian, and Chen, Qiu
- Subjects
- *
HUMAN facial recognition software , *VECTOR quantization , *ALGORITHMS , *COGNITIVE psychology , *ARTIFICIAL intelligence - Abstract
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases. [ABSTRACT FROM AUTHOR]
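The combination can be sketched as follows: vector-quantize local feature blocks against a codebook, take the codeword histogram, and extend it with the stationary distribution of the codeword transition chain, which injects the spatial ordering the plain histogram discards. Codebook training (e.g., by k-means) is assumed to happen elsewhere, and this is a schematic reading of the VQ+MSF pipeline rather than the authors' exact feature definition:

```python
import numpy as np

def vq_msf_features(blocks, codebook, iters=100):
    """Codevector histogram extended with Markov stationary features.
    blocks: (n, d) local feature vectors in image scan order;
    codebook: (K, d) codevectors (assumed trained elsewhere)."""
    # 1) Vector quantization: nearest codevector index for each block
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    K = len(codebook)
    hist = np.bincount(idx, minlength=K) / len(idx)
    # 2) Transition counts between consecutive codewords capture the
    #    spatial structure a plain histogram throws away
    T = np.zeros((K, K))
    for a, b in zip(idx[:-1], idx[1:]):
        T[a, b] += 1.0
    T = (T + 1e-6) / (T + 1e-6).sum(axis=1, keepdims=True)  # row-stochastic
    # 3) Stationary distribution of the codeword chain by power iteration
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        pi = pi @ T
    return np.concatenate([hist, pi])
```

Two images with identical histograms but different spatial layouts produce different transition chains, hence different concatenated features.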
- Published
- 2018
- Full Text
- View/download PDF
48. Reactive navigation in extremely dense and highly intricate environments.
- Author
-
Antich Tobaruela, Javier and Ortiz Rodríguez, Alberto
- Subjects
- *
ROBOT control systems , *AUTONOMOUS robots , *SIMULATION methods & models , *COMPUTER simulation , *MACHINE theory - Abstract
Reactive navigation is a well-known paradigm for controlling an autonomous mobile robot, which suggests making all control decisions through some light processing of the current/recent sensor data. Among the many advantages of this paradigm are: 1) the possibility of applying it to robots with limited and low-priced hardware resources, and 2) the ability to safely navigate a robot in completely unknown environments containing unpredictable moving obstacles. As a major disadvantage, nevertheless, the reactive paradigm may occasionally cause robots to get trapped in certain areas of the environment; typically, these conflicting areas have a large concave shape and/or are full of closely-spaced obstacles. In this last respect, an enormous effort has been devoted to overcoming such a serious drawback during the last two decades. As a result of this effort, a substantial number of new approaches for reactive navigation have been put forward. Some of these approaches have clearly improved the way a reactively-controlled robot can move among densely cluttered obstacles; other approaches have essentially focused on increasing the variety of obstacle shapes and sizes that can be successfully circumnavigated. In this paper, as a starting point, we choose the best existing reactive approach for moving in densely cluttered environments, and we also choose the existing reactive approach with the greatest ability to circumvent large intricate-shaped obstacles. Then, we combine these two approaches in a way that makes the most of both. From the experimental point of view, we use both simulated and real scenarios of challenging complexity for testing purposes. In such scenarios, we demonstrate that the combined approach proposed herein clearly outperforms the two individual approaches on which it is built. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
49. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data.
- Author
-
Ye, Fei
- Subjects
- *
ARTIFICIAL neural networks , *DATA mining , *PARTICLE swarm optimization , *PARAMETER estimation , *ALGORITHMS - Abstract
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-number m-dimensional vectors forming the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run for more epochs on the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine a solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifiers, outperform a random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. [ABSTRACT FROM AUTHOR]
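A minimal PSO loop of the kind described, with particles encoding real-valued configuration vectors, looks like the sketch below. In the paper's setting `fitness` would train a DNN for a few epochs and return the validation loss; here it is left as an arbitrary function, and the inertia/acceleration constants are conventional defaults rather than the paper's values:

```python
import random

def pso(fitness, dim, bounds, n_particles=10, iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize fitness over [lo, hi]^dim with particle swarm optimization.
    Each particle encodes a candidate (real-valued) network configuration."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Usage with a stand-in objective: `best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5, 5))`.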
- Published
- 2017
- Full Text
- View/download PDF
50. An adaptive map-matching algorithm based on hierarchical fuzzy system from vehicular GPS data.
- Author
-
Tang, Jinjun, Zhang, Shen, Zou, Yajie, and Liu, Fang
- Subjects
- *
FUZZY systems , *LAUNCH vehicle trajectories , *HISTORICAL distance , *GLOBAL Positioning System , *ALGORITHMS - Abstract
An improved hierarchical fuzzy inference method based on a C-measure map-matching algorithm is proposed in this paper, in which the C-measure represents the certainty, or probability, of the vehicle traveling on a given actual road. A strategy is first introduced that uses historical positioning information to perform curve-to-curve matching between vehicle trajectories and the shapes of candidate roads. This improves matching performance by overcoming the disadvantage of traditional map-matching algorithms, which consider only current information. An average historical distance is used to measure the similarity between vehicle trajectories and road shapes. The input of the system includes three variables: the distance between the position point and the candidate roads, the angle between the driving heading and the road direction, and the average distance. As the number of fuzzy rules would increase exponentially when adding average distance as a variable, a hierarchical fuzzy inference system is applied to reduce the fuzzy rules and improve calculation efficiency. Additionally, a learning process is incorporated to support the algorithm. Finally, a case study containing four different routes in Beijing is used to validate the effectiveness and superiority of the proposed method. [ABSTRACT FROM AUTHOR]
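The two-level structure can be illustrated with Gaussian-shaped membership functions; the function shapes, scales and min-style rule aggregation below are assumptions for illustration, not the paper's rule base or its learning procedure. Splitting the three inputs across two levels keeps each rule base two-dimensional, which is the point of the hierarchy:

```python
import math

def close(x, scale):
    """Membership of x in 'small/close': 1 at zero, decaying with scale."""
    return math.exp(-(x / scale) ** 2)

def c_measure(dist, angle_deg, avg_hist_dist,
              dist_scale=20.0, angle_scale=30.0, hist_scale=25.0):
    """Two-level fuzzy score: level 1 fuses the current point-to-road
    distance with the heading/road angle difference; level 2 fuses that
    with the average historical distance, so each level stays 2-input
    instead of one exponentially larger 3-input rule base."""
    level1 = min(close(dist, dist_scale), close(angle_deg, angle_scale))
    return min(level1, close(avg_hist_dist, hist_scale))

def match(candidates):
    """candidates: list of (road_id, dist_m, angle_deg, avg_hist_dist_m).
    Returns the road id with the highest C-measure."""
    return max(candidates, key=lambda c: c_measure(*c[1:]))[0]
```

A nearby, well-aligned road with a small historical distance wins over a far, poorly aligned one, which is the qualitative behavior the algorithm relies on.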
- Published
- 2017
- Full Text
- View/download PDF