388 results
Search Results
2. Speaking to algorithms? Rhetorical political analysis as technological analysis.
- Author
-
Dillet, Benoit
- Subjects
- *
POLITICAL science , *RHETORICAL analysis , *PERSUASION (Psychology) , *POLITICAL communication , *ALGORITHMS , *DIGITAL media - Abstract
In the last few years, research studies and opinion pieces have tried to account for the new polarisation and dealignment of US politics after Trump and of post-Brexit UK politics. It is now well established, both by academic research and by Facebook's own research, that Facebook leads to more polarisation in its users' political views, but rhetorical analysis has not yet accounted for the role played by algorithms in political communication and persuasion. What does social media do to rhetoric? Speech on social media is often treated as if it took place in a public sphere when it should not be. This misconception prevents rhetorical studies from taking the question of technology into consideration. Drawing on the recent literature in critical algorithm studies, I develop a new approach in rhetorical criticism. I argue here that the increasing agency that algorithms have acquired in delivering and mediating rhetoric means that we must consider the role played by intermediaries when examining rhetorical situations. This paper sheds light on what I call the four conditionalities of algorithms on rhetoric: (1) programmed speech content, (2) the verticalisation of political communication, (3) the new biases produced by digital media, and (4) rhetorical machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Virtual focus groups for improved A-Z list user experience.
- Author
-
Kraft, Amanda, Scronce, Gretchen, and Jones, Allison
- Subjects
- *
ELECTRONIC information resources , *STUDENT engagement , *USER interfaces , *ALGORITHMS , *METHODOLOGY - Abstract
User focus groups can be a particularly effective way to design user-driven changes to library services. This paper explains how three faculty librarians designed and conducted virtual focus groups to improve the College of Charleston (CofC) Libraries' A-Z Database List; provides analysis of the data gathered via focus group sessions and a follow-up survey; details how this information has been used to make changes for the sake of student user experience (UX) and user interface (UI) design; and shares overall impressions and insights gained from the study. By inviting student participants to explore the list in a setting that was both structured and open-ended, librarians learned what mattered most to students using this resource. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Machine Learning and Natural Language Processing in Psychotherapy Research: Alliance as Example Use Case.
- Author
-
Goldberg, Simon B., Flemotomos, Nikolaos, Martinez, Victor R., Tanana, Michael J., Kuo, Patty B., Pace, Brian T., Villatte, Jennifer L., Georgiou, Panayiotis G., Van Epps, Jake, Imel, Zac E., Narayanan, Shrikanth S., and Atkins, David C.
- Subjects
- *
ALGORITHMS , *AUTOMATIC speech recognition , *AUTOMATION , *COMPUTER software , *CONFERENCES & conventions , *COUNSELING , *LINGUISTICS , *MACHINE learning , *MEDICAL research , *NATURAL language processing , *PSYCHOTHERAPY , *STATISTICS , *T-test (Statistics) , *THERAPEUTIC alliance , *DATA analysis - Abstract
Artificial intelligence generally and machine learning specifically have become deeply woven into modern lives and technologies. Machine learning is dramatically changing scientific research and industry and may also hold promise for addressing limitations encountered in mental health care and psychotherapy. The current paper introduces machine learning and natural language processing as related methodologies that may prove valuable for automating the assessment of meaningful aspects of treatment. Prediction of therapeutic alliance from session recordings is used as a case in point. Recordings from 1,235 sessions of 386 clients seen by 40 therapists at a university counseling center were processed using automatic speech recognition software. Machine learning algorithms learned to predict client ratings of therapeutic alliance exclusively from session linguistic content. Using a portion of the data to train the model, machine learning algorithms modestly predicted alliance ratings from session content in an independent test set (Spearman's ρ = .15, p < .001). These results highlight the potential to harness natural language processing and machine learning to predict a key psychotherapy process variable that is relatively distal from linguistic content. Six practical suggestions for conducting psychotherapy research using machine learning are presented along with several directions for future research. Questions of dissemination and implementation may be particularly important to explore as machine learning improves in its ability to automate assessment of psychotherapy process and outcome. Public Significance Statement: Our study suggests that client-rated therapeutic alliance can be predicted, albeit modestly, from session content using machine learning models. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
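Entry 4 above predicts client-rated therapeutic alliance from session transcripts and scores the prediction with Spearman's ρ on an independent test set. The authors' actual NLP pipeline is not reproduced here; the following is a minimal sketch that uses TF-IDF features and ridge regression as illustrative stand-ins, with hypothetical `transcripts` and `alliance_ratings` inputs.

```python
# Minimal sketch: predict alliance ratings from session text, score with Spearman's rho.
# TF-IDF + ridge regression are illustrative stand-ins, not the authors' pipeline.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def evaluate_alliance_prediction(transcripts, alliance_ratings, seed=0):
    """transcripts: list of session transcript strings (hypothetical data);
    alliance_ratings: array of client-rated alliance scores, one per session."""
    txt_train, txt_test, y_train, y_test = train_test_split(
        transcripts, alliance_ratings, test_size=0.3, random_state=seed)

    vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
    X_train = vectorizer.fit_transform(txt_train)   # fit on training sessions only
    X_test = vectorizer.transform(txt_test)

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    y_pred = model.predict(X_test)

    rho, p_value = spearmanr(y_test, y_pred)        # rank correlation, as in the abstract
    return rho, p_value
```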
5. SMART IDENTIFICATION OF POWER QUALITY EVENTS USING NEW STOCKWELL TRANSFORM AND MACHINE LEARNING ALGORITHM.
- Author
-
Tariq, Muhammad and Mehmood, Tahir
- Subjects
- *
MACHINE learning , *ALGORITHMS , *ELECTRIC utilities , *METHODOLOGY , *FEATURE extraction - Abstract
Accurate detection, classification and mitigation of power quality (PQ) distortive events are of utmost importance for electrical utilities and corporations. An integrated mechanism is proposed in this paper for the identification of PQ distortive events. The proposed features are extracted from the waveforms of the distortive events using a modified form of the Stockwell transform. The categories of the distortive events are then determined from these feature values by applying an extreme learning machine as an intelligent classifier. The proposed methodology is tested, under both noisy and noiseless conditions, on a database of 7,500 simulated waveforms covering fifteen types of PQ events, such as impulses, interruptions, sags and swells, notches, oscillatory transients, harmonics, and flickering, as single-stage events and their possible combinations. The results of the analysis indicate satisfactory classification accuracy as well as reduced sensitivity to various noisy environments. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
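Entry 5 above classifies power-quality events by feeding Stockwell-transform features to an extreme learning machine (ELM). The modified Stockwell transform itself is not reproduced; below is a minimal, generic ELM classifier sketch (random hidden layer, output weights solved by least squares) operating on an already-extracted feature matrix, with hypothetical variable names.

```python
# Minimal extreme learning machine (ELM) classifier sketch: one hidden layer with
# random, fixed weights; output weights solved in closed form on one-hot targets.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.classes_ = np.unique(y)
        # Random input weights and biases (never trained).
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                            # hidden activations
        T = (y[:, None] == self.classes_[None, :]).astype(float)    # one-hot targets
        # Output weights via Moore-Penrose pseudoinverse (closed-form "training").
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return self.classes_[np.argmax(H @ self.beta, axis=1)]

# Usage sketch with hypothetical Stockwell-transform features:
# elm = SimpleELM(n_hidden=200).fit(features_train, labels_train)
# predicted = elm.predict(features_test)
```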
6. The use of text models in the formation of heuristics to solve tasks of diagnosing technical objects.
- Author
-
Korostil, Jerzy and Korostil, Olga
- Subjects
- *
HEURISTIC , *ALGORITHMS , *DECISION making , *PARAMETERS (Statistics) , *METHODOLOGY - Abstract
This paper describes research related to the use of heuristics in diagnostic tasks for complex technical objects. To build heuristics, the use of text models of technical objects is proposed. The paper therefore examines methods for deriving heuristics from text models and transforming them into logical formulae suitable for use in diagnostic algorithms. An analysis is carried out of the tasks solved during diagnostics, and methods of using heuristics in particular tasks are reviewed. It is proposed to use heuristics for decision making when implementing certain steps of the algorithms that monitor diagnostic parameters during diagnostics. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
7. Structural capacity in fire of laminated timber elements in compartments with exposed timber surfaces.
- Author
-
Wiesner, Felix, Bisby, Luke A., Bartlett, Alastair I., Hidalgo, Juan P., Santamaria, Simón, Deeny, Susan, and Hadden, Rory M.
- Subjects
- *
TIMBER , *TEMPERATURE , *METHODOLOGY , *FINITE element method , *ALGORITHMS - Abstract
Highlights: • Cross-laminated timber compartment fire experimental results are presented. • In-depth temperature measurements during the burning and cooling phase. • Continued progression of heat into the timber during the cooling phase. • Load capacity reduces after auto-extinction of timber compartment fire. • Interaction of fire dynamics and load resistance of importance for mass timber.
In compartment fires with boundaries consisting of exposed mass timber surfaces – for example in compartments with exposed cross-laminated timber (CLT) walls or floors – the thermal penetration depth during fire exposure, i.e. the depth of timber heated to temperatures significantly above ambient behind the char-timber interface, may have a significant influence on the load-bearing capacity of structural mass timber buildings, particularly in the decay phase of a real fire. This paper presents in-depth timber temperature measurements obtained during a series of full-scale fire experiments in compartments with partially exposed CLT boundaries, including decay phases. During experiments in which the timber surfaces achieved auto-extinction after consumption of the compartment fuel load, the thermal penetration depth continued to increase for more than one hour, whilst the progression of the in-depth charring front effectively halted at extinction. A simple calculation model is presented to demonstrate that this ongoing progression of thermal penetration continues to reduce the structural load-bearing capacity of the CLT elements, thereby increasing the potential for structural collapse during the decay phase of the fire. This issue is considered to be most important for timber compression elements. Currently utilised structural fire design methods for mass timber generally assume a fixed 'zero strength layer' depth to account for thermally affected timber behind the char line; however, they make no explicit attempt to account for these decay-phase effects. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
8. A Two-Phase Branch-and-Price-and-Cut for a Dial-a-Ride Problem in Patient Transportation.
- Author
-
Luo, Zhixing, Liu, Mengyang, and Lim, Andrew
- Subjects
- *
PARATRANSIT services , *PUBLIC transit , *URBAN transportation , *ALGORITHMS , *METHODOLOGY - Abstract
In this paper, we investigate an extension of the R-DARP recently proposed by [Liu M, Luo Z, Lim A (2015) A branch-and-cut algorithm for a realistic dial-a-ride problem. Transportation Res. Part B: Methodological 81(1):267–288.]. The R-DARP, as a variant of the classic dial-a-ride problem (DARP), consists of determining a set of minimum-distance trips for vehicles to transport a set of clients from their origins to their destinations, subject to side constraints, such as capacity constraints, time window constraints, maximum riding time constraints, and manpower planning constraints. Our problem extends the R-DARP by (1) changing its objective to first maximizing the number of requests satisfied and then minimizing the total travel distance of the vehicles, and (2) generalizing the lunch break constraints of staff members. To solve this problem, we propose a two-phase branch-and-price-and-cut algorithm based on a strong trip-based model. The trip-based model is built on a set of nondominated trips, which are enumerated by an ad hoc label-setting algorithm in the first phase. Then we decompose the trip-based model by Benders decomposition and propose a branch-and-price-and-cut algorithm to solve the decomposed model in the second phase. Our two-phase exact algorithm is tested on the R-DARP benchmark instances and a set of new instances generated according to the same real-world data set as the R-DARP instances. Our algorithm quickly solves all of the R-DARP instances to optimality and outperforms the branch-and-cut proposed by Liu, Luo, and Lim. On the 42 new test instances, our algorithm solves 27 instances to optimality in four hours with the largest instance consisting of 36 requests. The online appendix is available at https://doi.org/10.1287/trsc.2017.0772. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. Embedding of the Hamming space into a sphere with weighted quadrance metric and c-means clustering of nominal-continuous data.
- Author
-
Denisiuk, Aleksander and Grabowski, Michał
- Subjects
- *
DATA analysis , *METHODOLOGY , *CLUSTER analysis (Statistics) , *ALGORITHMS , *NUMERICAL analysis - Abstract
In this paper we present a new c-means clustering algorithm for combined continuous-nominal data. We use a spherical representation of nominal data. The impact of specific features is modeled with corresponding weights in the metric definition. To solve the constrained minimization problem, we transfer the methodology of reformulation functions to the considered context. As a result we obtain a clustering algorithm with adaptation of weights. A series of numerical experiments on real and synthetic data shows that the algorithm can successfully cluster raw, non-normalized data. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. Optimizing Storage Operations in Medium- and Long-Term Power System Models.
- Author
-
Wogrin, Sonja, Galbally, David, and Reneses, Javier
- Subjects
- *
ELECTRIC power systems research , *RENEWABLE energy sources , *METHODOLOGY , *ALGORITHMS , *ENERGY storage - Abstract
In this paper, we propose a new methodology to formulate storage behavior in medium- and long-term power system models that use a load duration curve. Traditionally in such models, the chronological information among individual hours is lost; information that is necessary to adequately model the operation of a storage facility. Therefore, these models are not fully capable of optimizing the actual operation of storage units, and often use pre-determined data or some sort of peak-shaving algorithm. In a rapidly changing power system, the proper characterization of storage behavior and its optimization becomes an increasingly important issue. This paper proposes a methodology to tackle the shortcomings of existing models. In particular, we employ the so-called system states framework to recover some of the chronological information within the load duration curve. This allows us to introduce a novel formulation for storage in a system states model. In a case study, we show that our method can lead to computational time reductions of over 90% while accurately replicating hourly behavior of storage levels. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
11. A Piecewise Solution to the Reconfiguration Problem by a Minimal Spanning Tree Algorithm.
- Author
-
Ramirez, Juan M. and Montoya, Diana P.
- Subjects
- *
SPANNING trees , *ALGORITHMS , *METHODOLOGY , *TOPOLOGY , *FEASIBILITY studies - Abstract
This paper proposes a minimal spanning tree (MST) algorithm to solve the network reconfiguration problem in radial distribution systems (RDS). The paper focuses on reducing power losses by selecting the best radial configuration. The reconfiguration problem is a non-differentiable and highly combinatorial optimization problem. The proposed methodology is a deterministic Kruskal's algorithm based on graph theory, which is appropriate for this application because it generates only feasible radial topologies. The proposed MST algorithm has been tested on an actual RDS, which has been split into subsystems. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2014
- Full Text
- View/download PDF
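Entry 11 above builds a radial configuration with a deterministic Kruskal-type minimal spanning tree algorithm. A generic Kruskal sketch with a union-find structure is shown below; the edge weights standing in for loss-related costs and the toy feeder data are hypothetical, and none of the paper's distribution-system constraints are modeled.

```python
# Generic Kruskal minimum spanning tree sketch (union-find). Selecting a spanning
# tree of the distribution graph yields a radial (loop-free) topology; the edge
# weights are hypothetical loss-related costs.
def kruskal_mst(n_nodes, edges):
    """edges: iterable of (weight, u, v) with node indices 0..n_nodes-1."""
    parent = list(range(n_nodes))

    def find(x):                      # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding this edge keeps the network radial
            parent[ru] = rv
            mst.append((u, v, weight))
    return mst

# Usage sketch on a toy 4-bus system with hypothetical costs:
# tree = kruskal_mst(4, [(1.0, 0, 1), (2.5, 1, 2), (0.7, 0, 2), (1.8, 2, 3)])
```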
12. CONVERGENCE AND CYCLING IN WALKER-TYPE SADDLE SEARCH ALGORITHMS.
- Author
-
LEVITT, ANTOINE and ORTNER, CHRISTOPH
- Subjects
- *
ALGORITHMS , *MAXIMA & minima , *MATHEMATICAL optimization , *STOCHASTIC convergence , *METHODOLOGY - Abstract
Algorithms for computing local minima of smooth objective functions enjoy a mature theory as well as robust and efficient implementations. By comparison, the theory and practice of saddle search are destitute. In this paper, we present results for idealized versions of the dimer and gentlest ascent dynamics (GAD) saddle search algorithms that showcase the limitations of what is theoretically achievable within the current class of saddle search algorithms: (1) we present an improved estimate on the region of attraction of saddles, (2) we give explicit examples of potential energy wells from which GAD-type dynamics are unable to escape, and (3) we present a local analysis of "singular points" around which the dynamics gets trapped and prove the existence of quasi-periodic solutions. These results indicate that it is impossible to obtain globally convergent variants of dimer- and GAD-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
13. Sparse representation of multivariate extremes with applications to anomaly detection.
- Author
-
Goix, Nicolas, Sabourin, Anne, and Clémençon, Stephan
- Subjects
- *
MULTIVARIATE analysis , *EXTREME value theory , *METHODOLOGY , *ALGORITHMS , *DIMENSIONAL reduction algorithms - Abstract
Capturing the dependence structure of multivariate extreme events is a major concern in many fields involving the management of risks stemming from multiple sources, e.g., portfolio monitoring, insurance, environmental risk management and anomaly detection. One convenient (nonparametric) characterization of extreme dependence in the framework of multivariate Extreme Value Theory (EVT) is the angular measure, which provides direct information about the probable “directions” of extremes, i.e., the relative contribution of each feature/coordinate of the largest observations. Modeling the angular measure in high-dimensional problems is a major challenge for the multivariate analysis of rare events. The present paper proposes a novel methodology aiming at exhibiting a particular kind of sparsity within the dependence structure of extremes. This is achieved by estimating the amount of mass spread by the angular measure on representative sets of directions corresponding to specific sub-cones of the positive orthant R_+^d. This dimension reduction technique paves the way towards scaling up existing multivariate EVT methods. Beyond a non-asymptotic study providing a theoretical validity framework for our method, we propose as a direct application a first anomaly detection algorithm based on multivariate EVT. This algorithm builds a sparse normal profile of extreme behaviors, to be confronted with new (possibly abnormal) extreme observations. Illustrative experimental results provide strong empirical evidence of the relevance of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Common computational model for coupling panel method with finite element method.
- Author
-
Goetzendorf-Grabowski, Tomasz and Mieloszyk, Jacek
- Subjects
- *
FINITE element method , *ALGORITHMS , *METHODOLOGY , *AERODYNAMICS , *AIRCRAFT industry - Abstract
Purpose: Conceptual and preliminary aircraft concepts are maturing earlier in the design process than ever before. To achieve that advanced level of maturity, multiple multidisciplinary analyses have to be performed, often with the use of numerical optimization algorithms. This calls for the right tools, which can handle such a demanding task. Often the toughest part of a modern design is handling an aircraft's computational models used for different analyses. Transferring geometry and loads from one program to another, or modifying internal structure, takes time and is not productive. The authors define the concept of a common computational model (CCM), which couples programs from different aerospace scientific disciplines. Data exchange between the software components is compatible, and multidisciplinary analysis can be automated to a high degree, including numerical optimization. Design/methodology/approach: The panel method was applied to aerodynamic analysis and was coupled with open-source FEM code within one computational process. Findings: The numerical results proved the effectiveness of the developed methodology. Practical implications: The developed software can be used within the design process of a new aircraft. Originality/value: This paper presents an original approach for advanced numerical analysis, as well as for multidisciplinary optimization of an aircraft. The presented results show possible applications. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
15. Trajectory-Based Place-Recognition for Efficient Large Scale Localization.
- Author
-
Lynen, Simon, Bosse, Michael, and Siegwart, Roland
- Subjects
- *
ALGORITHMS , *METHODOLOGY , *LOCALIZATION (Mathematics) , *ALGEBRAIC geometry , *PRIME numbers - Abstract
Place recognition is a core competency for any visual simultaneous localization and mapping system. Identifying previously visited places enables the creation of globally accurate maps, robust relocalization, and multi-user mapping. To match one place to another, most state-of-the-art approaches must decide a priori what constitutes a place, often in terms of how many consecutive views should overlap, or how many consecutive images should be considered together. Unfortunately, such threshold dependencies limit their generality to different types of scenes. In this paper, we present a placeless place recognition algorithm using a novel match-density estimation technique that avoids heuristically discretizing the space. Instead, our approach considers place recognition as a problem of continuous matching between image streams, automatically discovering regions of high match density that represent overlapping trajectory segments. The algorithm uses well-studied statistical tests to identify the relevant matching regions which are subsequently passed to an absolute pose algorithm to recover the geometric alignment. We demonstrate the efficiency and accuracy of our methodology on three outdoor sequences, including a comprehensive evaluation against ground-truth from publicly available datasets that shows our approach outperforms several state-of-the-art algorithms for place recognition. Furthermore we compare our overall algorithm to the currently best performing system for global localization and show how we outperform the approach on challenging indoor and outdoor datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
16. Image-based magnification calibration for electron microscope.
- Author
-
Ito, Koichi, Suzuki, Ayako, Aoki, Takafumi, and Tsuneta, Ruriko
- Subjects
- *
CALIBRATION , *ELECTRON microscopes , *ALGORITHMS , *ESTIMATION theory , *IMAGING systems , *METHODOLOGY - Abstract
Magnification calibration is a crucial task for the electron microscope to achieve accurate measurement of the target object. In general, magnification calibration is performed to obtain the correspondence between the scale of the electron microscope image and the actual size of the target object using the standard calibration samples. However, the current magnification calibration method mentioned above may include a maximum of 5 % scale error, since an alternative method has not yet been proposed. Addressing this problem, this paper proposes an image-based magnification calibration method for the electron microscope. The proposed method employs a multi-stage scale estimation approach using phase-based correspondence matching. Consider a sequence of microscope images of the same target object, where the image magnification is gradually increased so that the final image has a very large scale factor S (e.g., S = 1,000) with respect to the initial image. The problem considered in this paper is to estimate the overall scale factor S of the given image sequence. The proposed scale estimation method provides a new methodology for high-accuracy magnification calibration of the electron microscope. This paper also proposes a quantitative performance evaluation method of scale estimation algorithms using Mandelbrot images which are precisely scale-controlled images. Experimental evaluation using Mandelbrot images shows that the proposed scale estimation algorithm can estimate the overall scale factor S = 1,000 with approximately 0.1 % scale error. Also, a set of experiments using image sequences taken by an actual scanning transmission electron microscope (STEM) demonstrates that the proposed method is more effective for magnification calibration of a STEM compared with a conventional method. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
17. An adaptive growing and pruning algorithm for designing recurrent neural network.
- Author
-
Han, Hong-Gui, Zhang, Shuo, and Qiao, Jun-Fei
- Subjects
- *
RECURRENT neural networks , *ALGORITHMS , *ALGEBRA , *METHODOLOGY , *ARTIFICIAL neural networks - Abstract
The training of recurrent neural networks (RNNs) concerns the selection of their structures and connection weights. To efficiently enhance the generalization capabilities of RNNs, a recurrent self-organizing neural network (RSONN) using an adaptive growing and pruning algorithm (AGPA) is proposed in this paper. The AGPA self-organizes the structures of RNNs based on the information processing ability and competitiveness of hidden neurons during the learning process: hidden neurons of the RSONN are added or pruned to improve generalization performance. Furthermore, an adaptive second-order algorithm with an adaptive learning rate is employed to adjust the parameters of the RSONN, and a convergence analysis of the RSONN is given to show its computational efficiency. To demonstrate the merits of the RSONN for data modeling, several benchmark datasets and a real-world application associated with nonlinear systems modeling problems are examined, with comparisons against other existing methods. Experimental results show that the proposed RSONN effectively simplifies the network structure and performs better than some existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
18. THE LOW RESPONSE SCORE (LRS).
- Author
-
ERDMAN, CHANDRA and BATES, NANCY
- Subjects
- *
RESPONSE rates , *CONTESTS , *ALGORITHMS , *SURVEYS , *METHODOLOGY ,UNITED States census - Abstract
In 2012, the US Census Bureau posed a challenge under the America COMPETES Act, an act designed to improve the competitiveness of the United States by investing in innovation through research and development. The Census Bureau contracted Kaggle.com to host and manage a worldwide competition to develop the best statistical model to predict 2010 Census mail return rates. The Census Bureau provided competitors with a block group-level database consisting of housing, demographic, and socioeconomic variables derived from the 2010 Census, five-year American Community Survey estimates, and 2010 Census operational data. The Census Bureau then challenged teams to use these data (and other publicly available data) to construct the models. One goal of the challenge was to leverage winning models as inputs to a new model-based hard-to-count (HTC) score, a metric to stratify and target geographic areas according to propensity to self-respond in sample surveys and censuses. All contest winners employed data-mining and machine-learning techniques to predict mail-return rates. This made the models relatively hard to interpret (when compared with the Census Bureau’s original HTC score) and impossible to directly translate to a new HTC score. Nonetheless, the winning models contained insights toward building a new model-based score using variables from the database. This paper describes the original algorithm-based HTC score, insights gained from the Census Return Rate Challenge, and the model underlying a new HTC score. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
19. Multiple objects detection in biological images using a marked point process framework.
- Author
-
Descombes, Xavier
- Subjects
- *
IMAGING systems in biology , *CELLS , *ALGORITHMS , *DATA , *METHODOLOGY - Abstract
The marked point process framework has been successfully developed in the field of image analysis to detect a configuration of predefined objects. The goal of this paper is to show how it can be particularly applied to biological imagery. We present a simple model that shows how some of the challenges specific to biological data are well addressed by the methodology. We further describe an extension to this first model to address other challenges due, for example, to the shape variability in biological material. We finally show results that illustrate the MPP framework using the “simcep” algorithm for simulating populations of cells. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. k-Nearest Neighbor Neural Network Models for Very Short-Term Global Solar Irradiance Forecasting Based on Meteorological Data.
- Author
-
Chao-Rong Chen and Kartini, Unit Three
- Subjects
- *
METHODOLOGY , *PHOTOVOLTAIC power generation , *ALGORITHMS , *ROOT-mean-squares , *ELECTRICITY - Abstract
This paper proposes a novel methodology for very short-term forecasting of hourly global solar irradiance (GSI). The proposed methodology is based on meteorological data and is aimed especially at optimizing the operation of power generation from photovoltaic (PV) energy. The methodology combines a k-nearest neighbor (k-NN) algorithm with an artificial neural network (ANN) model. The k-NN-ANN method is designed to forecast GSI 60 min ahead based on meteorological data for a target PV station surrounded by eight adjacent PV stations. The novelty of this method is that it takes the meteorological data into account. A set of GSI measurement samples from a PV station in Taiwan is used as test data. The method implements k-NN as a preprocessing technique prior to the ANN. For the k-NN-ANN model, the mean absolute bias error (MABE) is 42 W/m2 and the root-mean-square error (RMSE) is 242 W/m2. The model's forecasts are then compared to measured data, and simulation results indicate that the k-NN-ANN-based model presented in this research can forecast hourly GSI with satisfactory accuracy. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
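Entry 20 above reports forecast quality with MABE and RMSE and combines k-NN preprocessing with an ANN. The sketch below shows the two error metrics and one plausible reading of the k-NN step (selecting similar historical samples before fitting a small neural network); the authors' exact pairing is not reproduced, and the data arrays are hypothetical.

```python
# Sketch: MABE / RMSE metrics plus one possible k-NN + ANN pairing, where the k
# nearest historical samples (by meteorological similarity) train a small MLP.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

def mabe(y_true, y_pred):
    """Mean absolute bias error (W/m2)."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root-mean-square error (W/m2)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def knn_ann_forecast(X_hist, gsi_hist, x_now, k=50, seed=0):
    """X_hist: historical meteorological feature rows; gsi_hist: GSI measured one
    hour later; x_now: current meteorological features (all hypothetical arrays)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_hist)
    _, idx = nn.kneighbors(x_now.reshape(1, -1))
    neighbors = idx[0]
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed)
    ann.fit(X_hist[neighbors], gsi_hist[neighbors])   # train on similar past conditions
    return ann.predict(x_now.reshape(1, -1))[0]
```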
21. Thinking critically about and researching algorithms.
- Author
-
Kitchin, Rob
- Subjects
- *
CRITICAL thinking , *ANALYTICAL skills , *ALGORITHMS , *THEORY of knowledge , *THOUGHT & thinking - Abstract
More and more aspects of our everyday lives are being mediated, augmented, produced and regulated by software-enabled technologies. Software is fundamentally composed of algorithms: sets of defined steps structured to process instructions/data to produce an output. This paper synthesises and extends emerging critical thinking about algorithms and considers how best to research them in practice. Four main arguments are developed. First, there is a pressing need to focus critical and empirical attention on algorithms and the work that they do given their increasing importance in shaping social and economic life. Second, algorithms can be conceived in a number of ways – technically, computationally, mathematically, politically, culturally, economically, contextually, materially, philosophically, ethically – but are best understood as being contingent, ontogenetic and performative in nature, and embedded in wider socio-technical assemblages. Third, there are three main challenges that hinder research about algorithms (gaining access to their formulation; they are heterogeneous and embedded in wider systems; their work unfolds contextually and contingently), which require practical and epistemological attention. Fourth, the constitution and work of algorithms can be empirically studied in a number of ways, each of which has strengths and weaknesses that need to be systematically evaluated. Six methodological approaches designed to produce insights into the nature and work of algorithms are critically appraised. It is contended that these methods are best used in combination in order to help overcome epistemological and practical challenges. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
22. Viral system algorithm: foundations and comparison between selective and massive infections.
- Author
-
Cortés, Pablo, García, José M, Muñuzuri, Jesús, and Guadix, José
- Subjects
- *
VIRUS diseases , *ALGORITHMS , *METHODOLOGY , *BIONICS , *MATHEMATICAL optimization , *BIOLOGICAL systems - Abstract
This paper presents a guided and deep introduction to viral systems (VS), a novel bio-inspired methodology based on the natural biological process that takes place when an organism has to respond to an external infection. VS has proven to be very efficient when dealing with problems of high complexity. The paper discusses the foundations of VS, presents the main pseudocodes that need to be implemented and illustrates the application of the methodology. A comparison between VS and other metaheuristics, as well as between different VS approaches, is presented. Finally, trends and new research opportunities are presented for this bio-inspired methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
23. Hybrid fundamental-solution-based FEM for piezoelectric materials.
- Author
-
Cao, Changyong, Qin, Qing-Hua, and Yu, Aibing
- Subjects
- *
PIEZOELECTRIC materials , *HYBRID systems , *FINITE element method , *INTERPOLATION , *BOUNDARY value problems , *ALGORITHMS , *METHODOLOGY , *STRESS concentration - Abstract
In this paper, a new type of hybrid finite element method (FEM), hybrid fundamental-solution-based FEM (HFS-FEM), is developed for analyzing plane piezoelectric problems by employing fundamental solutions (Green's functions) as internal interpolation functions. A modified variational functional used in the proposed model is first constructed, and then the assumed intra-element displacement fields satisfying a priori the governing equations of the problem are constructed by using a linear combination of fundamental solutions at a number of source points located outside the element domain. To ensure continuity of fields over inter-element boundaries, conventional shape functions are employed to construct the independent element frame displacement fields defined over the element boundary. The proposed methodology is assessed by several examples with different boundary conditions and is also used to investigate the phenomenon of stress concentration in infinite piezoelectric medium containing a hole under remote loading. The numerical results show that the proposed algorithm has good performance in numerical accuracy and mesh distortion insensitivity compared with analytical solutions and those from ABAQUS. In addition, some new insights on the stress concentration have been clarified and presented in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
24. New lower bounds for certain classes of bin packing algorithms
- Author
-
Balogh, János, Békési, József, and Galambos, Gábor
- Subjects
- *
ALGORITHMS , *MATHEMATICAL bounds , *PERFORMANCE , *ASYMPTOTES , *METHODOLOGY , *PLANE curves - Abstract
On-line algorithms have been extensively studied for the one-dimensional bin packing problem. In this paper, we investigate two classes of one-dimensional bin packing algorithms and give better lower bounds for their asymptotic worst-case behavior. For on-line algorithms, the best lower bound so far was given by van Vliet (1992), who proved that there is no on-line bin packing algorithm with a better asymptotic performance ratio than 1.54014…. In this paper, we give an improvement on this bound, and we investigate the parametric case as well. For lists where the elements are preprocessed according to their sizes in non-increasing order, Csirik et al. (1983) proved a lower bound on the asymptotic performance ratio of any on-line algorithm; we improve that result as well. [Copyright Elsevier] (An illustrative sketch follows this record.)
- Published
- 2012
- Full Text
- View/download PDF
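Entry 24 above concerns lower bounds on the asymptotic worst-case ratio of on-line bin packing algorithms. The lower-bound constructions themselves are combinatorial and not reproduced; the sketch below only illustrates the object being bounded, a simple on-line algorithm (First Fit) packing items as they arrive, compared against the trivial lower bound given by the total item size.

```python
# On-line First Fit bin packing sketch: each arriving item goes into the first
# open bin it fits in, or opens a new bin. Bin capacity is 1.0.
import math

def first_fit(items, capacity=1.0):
    bins = []                              # remaining free space per open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free + 1e-12:       # fits into an existing bin
                bins[i] = free - size
                break
        else:
            bins.append(capacity - size)   # open a new bin
    return len(bins)

items = [0.4, 0.4, 0.3, 0.3, 0.3, 0.3]     # toy on-line input
used = first_fit(items)
lower_bound = math.ceil(sum(items))        # no packing can use fewer bins than total size
print(used, lower_bound)                   # 3 bins used vs. a lower bound of 2 here
```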
25. Handling uncertainties in vehicle routing problems through data preprocessing
- Author
-
Chardy, Matthieu and Klopfenstein, Olivier
- Subjects
- *
VEHICLE routing problem , *FEASIBILITY studies , *ALGORITHMS , *LABOR supply , *METHODOLOGY , *OPERATIONS management , *MONTE Carlo method - Abstract
This paper presents a global preprocessing methodology for handling uncertainties in operations management. Beyond theoretical considerations on solution feasibility, the methodology provides practitioners with a Monte Carlo simulation-based framework for effective risk management. The main strength of this methodology is that it is easily applicable to almost any decision problem. The application field of the paper is a real-life workforce management problem, for which we propose several mixed integer formulations as well as dedicated solution algorithms. Extensive numerical tests on real-life instances assess the benefit of preprocessing schemes when performed as recommended by our approach, and thus prove its practical relevance. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
26. Multimodal Optimization Using a Bi-Objective Evolutionary Algorithm.
- Author
-
Deb, Kalyanmoy and Saha, Amit
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS , *METHODOLOGY , *EVOLUTIONARY computation , *MATHEMATICAL variables - Abstract
In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can have better knowledge about different optimal solutions in the search space and, as and when needed, the current solution may be switched to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, there has been a steady stream of new algorithms. Most of these methodologies employ a niching scheme in an existing single-objective evolutionary algorithm framework so that similar solutions in a population are de-emphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved test problems with up to 16 variables and as many as 48 optimal solutions, and for the first time suggest multimodal constrained test problems which are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting, and more importantly opens up further avenues for research and application. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
27. Maintaining existing zoning systems using automated zone-design techniques: methods for creating the 2011 Census output geographies for England and Wales.
- Author
-
Cockings, Samantha, Harfoot, Andrew, Martin, David, and Hornby, Duncan
- Subjects
- *
ZONING , *CENSUS , *METHODOLOGY , *LOCAL knowledge , *ALGORITHMS - Abstract
Automated zone-design methods are increasingly being used to create zoning systems for a range of purposes, such as the release of census statistics or the investigation of neighbourhood effects on health. Inevitably, the characteristics originally underpinning the design of a zoning system (e.g., population size or homogeneity of the built environment) change through time. Rather than designing a completely new system every time substantive change occurs, or retaining an existing system which will become increasingly unfit for purpose, an alternative is to modify the existing system such that zones which still meet the design criteria are retained, but those which are no longer fit for purpose are split or merged. This paper defines the first generic methodology for the automated maintenance of existing zoning systems. Using bespoke, publicly available software (AZTool), the methodology is employed to modify the 2001 Census output geographies within six local authority districts in England and Wales in order to make them suitable for the release of contemporary population-related data. Automated maintenance of an existing system is found to be a more iterative and constrained problem than designing a completely new system; design constraints frequently have to be relaxed and manual intervention is occasionally required. Nonetheless, existing zone-design techniques can be successfully adapted and implemented to automatically maintain an existing system. The findings of this paper are of direct relevance both to the Office for National Statistics in their design of the 2011 Census output geographies for England and Wales and to any other countries or organisations seeking to maintain an existing zoning system. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
28. The Method of Manufactured Universes for validating uncertainty quantification methods
- Author
-
Stripling, H.F., Adams, M.L., McClarren, R.G., and Mallick, B.K.
- Subjects
- *
UNCERTAINTY (Information theory) , *METHODOLOGY , *ERROR analysis in mathematics , *PREDICTION models , *DIFFUSION , *SIMULATION methods & models , *GAUSSIAN processes , *ALGORITHMS - Abstract
The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which “experimental” data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport “universe”, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new “experiments” within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
29. Improving visual analytics environments through a methodological framework for automatic clutter reduction
- Author
-
Bertini, Enrico and Santucci, Giuseppe
- Subjects
- *
VISUAL analytics , *METHODOLOGY , *DATA visualization , *RANDOM data (Statistics) , *DATA analysis , *ALGORITHMS - Abstract
One of the main characteristics of visual analytics is the tight integration between automatic computation and interactive visualization. This generally corresponds to the availability of powerful algorithms that allow for manipulating the data under analysis, transforming it in order to feed suitable visualizations. This paper focuses on more general-purpose automatic computations and presents a methodological framework that can improve the quality of the visualizations adopted in the analytical process, using the dataset at hand and the actual visualization. In particular, the paper deals with the critical issue of visual clutter reduction, presenting a general strategy for analyzing and reducing clutter through random data sampling. The basic idea is to model the visualization in a virtual space in order to analyze both clutter and data features (e.g., absolute density, relative density, etc.). In this way we can measure the visual overlapping that is likely to affect a visualization representing a large dataset, obtaining precise visual quality metrics about the visualization degradation and devising automatic sampling strategies to improve the overall image quality. Metrics and algorithms have been tuned taking into account the results of suitable user studies. We describe our proposal using two running case studies, one on 2D scatterplots and the other on parallel coordinates. [Copyright Elsevier] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
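Entry 29 above reduces visual clutter by modeling the visualization in a virtual space, measuring overlap, and sampling the data. The framework's quality metrics are not reproduced; the sketch below only illustrates the general idea for a 2D scatterplot: bin points into virtual pixels, measure how many collide, and randomly sample until the collision rate falls below a hypothetical threshold.

```python
# Sketch: estimate overplotting on a virtual pixel grid and randomly thin the
# dataset until the fraction of colliding points drops below a target.
import numpy as np

def collision_rate(x, y, grid=(100, 100)):
    """Fraction of points that share a virtual pixel with at least one other point."""
    gx = np.floor((x - x.min()) / (np.ptp(x) + 1e-12) * (grid[0] - 1)).astype(int)
    gy = np.floor((y - y.min()) / (np.ptp(y) + 1e-12) * (grid[1] - 1)).astype(int)
    cells, counts = np.unique(np.stack([gx, gy], axis=1), axis=0, return_counts=True)
    return counts[counts > 1].sum() / len(x)

def declutter_by_sampling(x, y, target=0.2, step=0.9, seed=0):
    """Repeatedly thin the data by `step` until the collision rate meets `target`."""
    rng = np.random.default_rng(seed)
    keep = np.arange(len(x))
    while collision_rate(x[keep], y[keep]) > target and len(keep) > 10:
        keep = rng.choice(keep, size=int(len(keep) * step), replace=False)
    return keep   # indices of the sampled points to draw
```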
30. Guidance algorithms for proximity to target spacecraft.
- Author
-
Wu, Shunan, Sun, Zhaowei, Radice, Gianmarco, and Wu, Xiande
- Subjects
- *
SPACE vehicle orbits , *STRUCTURAL design , *ALGORITHMS , *METHODOLOGY , *TECHNICAL specifications , *PROXIMITY spaces - Abstract
Purpose - One of the primary problems in the field of on-orbit service and space conflict is related to the approach to the target. The development of guidance algorithms is one of the main research areas in this field. The objective of this paper is to address the guidance problem for autonomous proximity manoeuvres of a chase-spacecraft approaching a target spacecraft. Design/methodology/approach - The process of autonomous proximity is divided into three phases: proximity manoeuvre, fly-around manoeuvre, and final approach. The characteristics of the three phases are analyzed. Considering the time factor of autonomous proximity, different orbits for the three phases are planned. Different guidance algorithms, which are based on multi-pulse manoeuvres, are then devised. Findings - This paper proposes three phases of autonomous proximity and then designs a guidance method, which hinges on a multi-pulse algorithm and different orbits for the three phases; in addition, a method of impulse selection is devised. Practical implications - An easy methodology for the analysis and design of autonomous proximity manoeuvres is proposed, which could also be considered for other space applications such as formation flying deployment and reconfiguration. Originality/value - Based on this guidance method, the manoeuvre-flight period of the chase-spacecraft can be set in accordance with the mission requirements; the constraints on fuel mass and manoeuvre time are both considered and satisfied. Consequently, this proposed guidance method can effectively deal with the problem of proximity approach to a target spacecraft. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
31. Optimized combustion system in coal-fired boiler: using optimization control and fieldbus.
- Author
-
Geng Liang, Hong Wang, Wen Li, and Yan Bai
- Subjects
- *
MATHEMATICAL optimization , *COMBUSTION , *COAL-fired furnaces , *CONTROL theory (Engineering) , *MEASUREMENT , *METHODOLOGY , *ALGORITHMS - Abstract
This paper presents a comprehensive design and implementation of control and optimization for the combustion system in heat-supplying boilers based on a fieldbus, mainly including an air-to-fuel ratio optimization algorithm based on a PID controller and a soft-measurement methodology for airflow based on the orifice nature of the air pre-heater. After the process description and general control schemes are presented, measurement and control principles for the system are given. After an optimization algorithm for the air-to-fuel ratio with constant step is introduced briefly, an adaptive optimization algorithm based on a PID controller is proposed, and its detailed design is presented. Advantages of the proposed adaptive optimization algorithm can be seen after comparison with the former algorithm. Approaches for the soft measurement of airflow in a boiler system, based on the differential air pressure between the inlet and outlet of the air pre-heater and that of the fuel, are presented. The complete control strategies for the designed system are given. Implementation of the whole system with a Foundation Fieldbus, including structure design for the control system and configuration design for the proposed optimization algorithm, is presented at length. Control effects of the designed system are given, demonstrating that the design of the whole system was a success, as it has run reliably for a long time. Some important conclusions on the proposed methodologies and designs in measurement and control using the fieldbus in combustion control are presented at the end of the paper. [ABSTRACT FROM PUBLISHER] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
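Entry 31 above builds its adaptive air-to-fuel ratio optimization around a PID controller. The boiler model, soft airflow measurement, and fieldbus implementation are not reproduced; the following is only a textbook discrete PID loop sketch, driving a hypothetical air damper toward a hypothetical flue-gas oxygen setpoint as a stand-in for maintaining the optimized air-to-fuel ratio.

```python
# Textbook discrete PID sketch: correct the air damper so a measured flue-gas O2
# value tracks a setpoint (stand-in for holding the optimized air-to-fuel ratio).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage sketch with hypothetical gains and a hypothetical read/actuate interface:
# pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
# damper_position += pid.update(o2_setpoint, o2_measured)
```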
32. Probabilistic evaluation for the implicit limit-state function of stability of a highway tunnel in China
- Author
-
Su, Yong-Hua, Li, Xiang, and Xie, Zhi-Yong
- Subjects
- *
TUNNEL design & construction , *ALGORITHMS , *SHOTCRETE , *ENGINEERING , *MONTE Carlo method , *METHODOLOGY - Abstract
This paper presents a practical algorithm for the first-order reliability method (FORM) to deal with the implicit nature of a limit-state function (LSF) in the reliability assessment of the stability of a working highway tunnel constructed in a mountainous area in China. First, a mechanical model to determine the LSF, which is not explicitly known and exhibits complex non-linear behaviour, is formulated for the primary support provided by the combination of shotcrete and rockbolts. After concisely reviewing the basic concepts relevant to the FORM, the central difference approximation is introduced to estimate the partial derivatives of the LSF. By consideration of Taylor's formula, the LSF can be transformed into an equation involving a single unknown, the reliability index, and the resulting solution procedure for determining the reliability index can be rendered based on the derivation rule of compound functions. A workflow for the tunnel reliability problem posed by the implicit LSF when applying the FORM is further developed. Eventually, the proposed methodology for the LSF in the non-explicit circumstance is used to perform the probabilistic evaluation of the working tunnel, and choices of the step-length coefficient values that affect the calculation results are suggested. Comparisons are made with the Monte-Carlo simulation method (MCSM) to assess the computational accuracy and efficiency of the algorithm proposed in this paper. It is shown that the MCSM used to obtain the “exact” solutions entails formidable computational effort, whereas the current algorithm, which alleviates the computational labour dramatically, provides an efficient way of implementing reliability calculations for the implicit LSF and offers results in satisfactory agreement with the “exact” ones when the step-length coefficient is properly chosen. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
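Entry 32 above evaluates the reliability index for an implicit limit-state function by estimating its partial derivatives with central differences inside a FORM iteration. The tunnel-specific mechanical model is not reproduced; below is a generic sketch of a central-difference gradient and the standard HL-RF style update, assuming independent standard-normal variables, which is an assumption and not the paper's exact scheme.

```python
# Generic FORM sketch: central-difference gradient of an implicit limit-state
# function g(u) in standard normal space, plus the standard HL-RF update.
import numpy as np

def central_diff_grad(g, u, h=1e-4):
    """Estimate the gradient of g at u by central differences (g may be a black box)."""
    grad = np.zeros_like(u, dtype=float)
    for i in range(len(u)):
        up, um = u.copy(), u.copy()
        up[i] += h
        um[i] -= h
        grad[i] = (g(up) - g(um)) / (2.0 * h)
    return grad

def form_reliability_index(g, n_vars, n_iter=50, tol=1e-6):
    """Iterate the HL-RF update u <- ((grad.u - g(u)) / |grad|^2) * grad; beta = |u|."""
    u = np.zeros(n_vars)
    for _ in range(n_iter):
        grad = central_diff_grad(g, u)
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)   # reliability index beta

# Toy usage with a linear limit state g(u) = 3 - u1 - u2 (beta = 3 / sqrt(2)):
# beta = form_reliability_index(lambda u: 3.0 - u[0] - u[1], n_vars=2)
```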
33. Optical measurements of tsunami inundation through an urban waterfront modeled in a large-scale laboratory basin
- Author
-
Rueben, M., Holman, R., Cox, D., Shin, S., Killian, J., and Stanley, J.
- Subjects
- *
TSUNAMI hazard zones , *WATERFRONTS , *EXPERIMENTS , *OPTICAL measurements , *METHODOLOGY , *ALGORITHMS , *CAMCORDERS , *LABORATORIES - Abstract
This paper presents optical measurements of tsunami inundation through an urban waterfront in a laboratory wave basin. The physical model was constructed at 1:50 scale and was an idealization of the town of Seaside, Oregon. The fixed-bed model was designed to study the initial inundation zone along an urban waterfront, such that the flow around several large buildings could be observed. This paper presents an analysis of the optical measurements made with two overhead video cameras, focusing on tracking the leading edge of the tsunami inundation through the urban waterfront, and quantifies the accuracy of the algorithm used to track the edge. The results show that the methodology provides high-resolution information in both time and space of the leading edge position, and that these data can be used to quantify the influence of large macro-roughness features on the tsunami inundation processes in laboratory settings. The overall effect of the macro-roughness was to decrease the bore propagation speed relative to the control section with no macro-roughness. The bore speed could be reduced by as much as 40% due to the presence of the macro-roughness relative to the control section. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
34. Forecasting daily potential evapotranspiration using machine learning and limited climatic data
- Author
-
Torres, Alfonso F., Walker, Wynn R., and McKee, Mac
- Subjects
- *
EVAPOTRANSPIRATION , *MACHINE learning , *IRRIGATION , *FORECASTING , *WEATHER , *ALGORITHMS , *METHODOLOGY , *STATISTICAL bootstrapping , *WATER management - Abstract
Anticipating, or forecasting, near-term irrigation demands is a requirement for improved management of conveyance and delivery systems. The most important component of a forecasting regime for irrigation is a simple yet reliable approach for forecasting crop water demands, which in this paper is represented by the reference or potential evapotranspiration (ETo). In most cases, weather data in the area are limited to a reduced number of measured variables, so current or future ETo estimation is restricted. This paper summarizes the results of testing two proposed ETo forecasting schemes under the mentioned conditions. The first, or “direct”, approach involves forecasting ETo using historically computed ETo values. The second, or “indirect”, approach involves forecasting the weather parameters required for the ETo calculation based on historical data and then computing ETo. A statistical machine learning algorithm, the Multivariate Relevance Vector Machine (MVRVM), is applied to both forecasting schemes. The general ETo model used is the 1985 Hargreaves equation, which requires only minimum and maximum daily air temperatures and is thus well suited to regions lacking more comprehensive climatic data. The utility and practicality of the forecasting methodology are demonstrated with an application to an irrigation project in Central Utah. To determine the advantage and suitability of the applied algorithm, another learning machine, the Multilayer Perceptron (MLP), is used for comparison purposes. The robustness and stability of the proposed schemes are tested by the application of bootstrap analysis. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
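Entry 34 above uses the 1985 Hargreaves equation, which estimates reference evapotranspiration from daily minimum and maximum air temperature plus extraterrestrial radiation. A transcription of the commonly cited form of that equation is sketched below; the paper's MVRVM/MLP forecasting machinery is not reproduced, and the radiation term Ra is taken as a given input rather than computed from latitude and day of year.

```python
# 1985 Hargreaves reference evapotranspiration (ETo), commonly cited form:
#   ETo = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
# with temperatures in deg C and Ra (extraterrestrial radiation) expressed as
# equivalent evaporation in mm/day, so ETo is also in mm/day.
import math

def hargreaves_eto(t_min, t_max, ra_mm_per_day):
    t_mean = 0.5 * (t_min + t_max)
    return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0))

# Example: Tmin = 10 C, Tmax = 28 C, Ra ~= 16 mm/day equivalent
# print(hargreaves_eto(10.0, 28.0, 16.0))   # roughly 5.7 mm/day
```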
35. The closed form t-T-P shifting (CFS) algorithm.
- Author
-
Gergesova, M., Zupancˇicˇ, B., Saprunov, I., and Emri, I.
- Subjects
- *
SHEAR (Mechanics) , *CURVES in engineering , *METHODOLOGY , *ALGORITHMS , *PRESSURE , *MATHEMATICAL functions , *TEMPERATURE - Abstract
Time-dependent material functions of engineering plastics within the exploitation range of temperatures extend over several decades of time. For this reason, material characterization is carried out at different temperatures and/or pressures within a certain experimental window. Using the time-temperature and/or time-pressure superposition principle, these response function segments can be shifted along the logarithmic time-scale to obtain a master curve at selected reference conditions. This shifting is commonly performed manually ('by hand') and requires some experience. Unfortunately, manual shifting is not based on a commonly agreed mathematical procedure which would, for a given set of experimental data, always yield exactly the same master curve, independent of the person who executes the shifting process. Thus, starting from the same set of experimental data, two different researchers could, and very likely will, construct two different master curves. In this paper, we propose a closed-form mathematical methodology (CFS) which completely removes the ambiguity related to manual shifting procedures. The paper presents the derivation of the shifting algorithm and its validation using several sets of simulated and real experimental data. It has been shown that the error caused by shifting performed with CFS is at least 10-50 times smaller than the underlying experimental error. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
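For entry 35 above: the closed-form CFS expressions themselves are in the paper; the sketch below only illustrates the underlying task, choosing a horizontal (log-time) shift so that a segment measured at one temperature superposes onto a reference segment. It is a least-squares stand-in, not the authors' closed-form solution; the function names, search bounds and overlap handling are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def shift_factor(log_t_ref, log_y_ref, log_t_seg, log_y_seg):
    """Log-time shift that best superposes a segment onto the reference curve.

    The segment is slid horizontally by s; the misfit is evaluated only where
    the shifted segment overlaps the reference data (arrays sorted in log t).
    """
    def misfit(s):
        t_shift = log_t_seg + s
        lo, hi = max(log_t_ref[0], t_shift[0]), min(log_t_ref[-1], t_shift[-1])
        mask = (t_shift >= lo) & (t_shift <= hi)
        if mask.sum() < 2:
            return 1e9                      # essentially no overlap for this shift
        ref = np.interp(t_shift[mask], log_t_ref, log_y_ref)
        return float(np.mean((ref - log_y_seg[mask]) ** 2))

    return minimize_scalar(misfit, bounds=(-6.0, 6.0), method="bounded").x

# Illustrative segments of a synthetic master curve, offset by one decade:
log_t = np.linspace(-2, 2, 50)
master = np.tanh(log_t)
print(round(shift_factor(log_t, master, log_t - 1.0, np.tanh(log_t)), 2))  # ~ +1.0
```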
36. Approximation of action theories and its application to conformant planning
- Author
-
Tu, Phan Huy, Son, Tran Cao, Gelfond, Michael, and Morales, A. Ricardo
- Subjects
- *
ACTION theory (Psychology) , *APPROXIMATION theory , *METHODOLOGY , *ARTIFICIAL intelligence , *CURVES in engineering , *LOGIC programming , *SEMANTICS , *ALGORITHMS , *INFORMATION theory - Abstract
Abstract: This paper describes our methodology for building conformant planners, which is based on recent advances in the theory of action and change and in answer set programming. The development of a planner for a given dynamic domain starts with encoding the knowledge about fluents and actions of the domain as an action theory of some action language. Our choice in this paper is – an action language with dynamic and static causal laws and executability conditions. An action theory of defines a transition diagram containing all the possible trajectories of the domain. A transition belongs to iff the execution of the action a in the state s may move the domain to the state . The second step in the planner development consists of finding a deterministic transition diagram such that nodes of are partial states of , its arcs are labeled by actions, and a path in from an initial partial state to a partial state satisfying the goal corresponds to a conformant plan for and in . The transition diagram is called an ‘approximation’ of . We claim that a concise description of an approximation of can often be given by a logic program under the answer set semantics. Moreover, complex initial situations and constraints on plans can also be expressed by logic programming rules and included in . If this is possible, then the problem of finding a parallel or sequential conformant plan can be reduced to computing answer sets of . This can be done by general-purpose answer set solvers. If plans are sequential and long, then this method can be too time consuming. In this case, is used as a specification for a procedural graph-searching conformant planning algorithm. The paper illustrates this methodology by building several conformant planners which work for domains with complex relationships between the fluents. The efficiency of the planners is experimentally evaluated on a number of new and old benchmarks. In addition, we show that for a subclass of action theories of our planners are complete, i.e., if in we cannot get from to a state satisfying the goal then there is no conformant plan for and in . [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
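For entry 36 above: the planners in the paper are built from action theories, approximations and answer set programming; purely to make the notion of a conformant plan concrete, here is a tiny breadth-first search over belief states (sets of possible world states) on a made-up toy domain. Everything here (the domain, names, and deterministic actions) is an illustrative assumption, not the paper's machinery.

```python
from collections import deque

def conformant_plan(initial_states, actions, goal):
    """A plan that reaches the goal from every possible initial state.

    `actions` maps an action name to a deterministic transition function
    state -> state; `goal` is a predicate on single states; the search runs
    over belief states (frozensets of states compatible with what is known).
    """
    start = frozenset(initial_states)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        belief, plan = frontier.popleft()
        if all(goal(s) for s in belief):
            return plan
        for name, step in actions.items():
            nxt = frozenset(step(s) for s in belief)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

# Toy domain: a light whose initial state is unknown; "toggle" flips it,
# "turn_off" always switches it off.  The conformant plan is ["turn_off"].
actions = {"toggle": lambda s: not s, "turn_off": lambda s: False}
print(conformant_plan({True, False}, actions, goal=lambda s: s is False))
```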
37. SLOTS: Effective Algorithm for Sensor Placement in Water Distribution Systems.
- Author
-
Dorini, Gianluca, Jonkergouw, Philip, Kapelan, Zoran, and Savic, Dragan
- Subjects
- *
WATER pollution , *WATER distribution , *METHODOLOGY , *DETECTORS , *SENSOR networks , *ALGORITHMS - Abstract
This paper deals with methods aimed at the effective and efficient detection of accidental and/or intentional contaminant intrusions in water distribution systems. The objective of this paper is to present a methodology entitled sensors local optimal transformation system (SLOTS) to address both single-objective and multiobjective sensor layout problems. SLOTS is tested on the two benchmark water distribution networks used for the Battle of the Water Sensor Networks (BWSN) challenge, held as part of the Water Distribution Systems Analysis Symposium in Cincinnati in 2006. The objectives considered are detection likelihood and the expected population affected prior to detection. The results obtained demonstrate that SLOTS sensor placements are often near optimal. For both single-objective and multiobjective cases, SLOTS is shown to be capable of identifying placements that consistently outperform one of the best BWSN methodologies, the greedy algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
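For entry 37 above: SLOTS itself is not reproduced here; the sketch below is only the greedy baseline the abstract compares against, repeatedly adding the node whose sensor detects the most not-yet-covered contamination scenarios. The data structure (precomputed detection sets per node) and the toy numbers are assumptions for illustration.

```python
def greedy_placement(detects, n_sensors):
    """Greedy sensor placement maximizing the number of detected scenarios.

    `detects[node]` is the set of contamination scenarios a sensor at `node`
    would detect; nodes are added one at a time by largest marginal gain.
    """
    chosen, covered = [], set()
    for _ in range(n_sensors):
        best = max(detects, key=lambda n: len(detects[n] - covered))
        if not detects[best] - covered:
            break                          # nothing left to gain
        chosen.append(best)
        covered |= detects[best]
    return chosen, covered

# Toy instance: 4 candidate nodes, 5 intrusion scenarios.
detects = {"n1": {1, 2}, "n2": {2, 3, 4}, "n3": {4}, "n4": {5}}
print(greedy_placement(detects, n_sensors=2))   # (['n2', 'n1'], {1, 2, 3, 4})
```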
38. A novel image denoising algorithm in wavelet domain using total variation and grey theory.
- Author
-
Hong-jun Li, Zhi-min Zhao, and Xiao-lei Yu
- Subjects
- *
DIGITAL image processing , *ALGORITHMS , *WAVELETS (Mathematics) , *GIBBS phenomenon , *MATHEMATICAL models , *METHODOLOGY - Abstract
Purpose - Traditional total variation (TV) models in the wavelet domain apply thresholding directly to coefficient selection and exhibit the Gibbs phenomenon. However, the nonzero coefficient index set selected by hard thresholding may not be the best choice for obtaining the least oscillatory reconstructions near edges. This paper aims to propose an image denoising method based on TV and grey theory in the wavelet domain that addresses this defect of traditional methods. Design/methodology/approach - The authors divide the wavelet domain into two parts, a low-frequency area and a high-frequency area, and use different methods in each. They apply grey theory to wavelet coefficient selection. The new algorithm provides a new way of selecting wavelet coefficients, solves the sorting of the nonzero coefficients, and achieves a good image denoising result while reducing the Gibbs phenomenon. Findings - The results show that the method proposed in this paper can distinguish between image information and noise accurately and also reduces the Gibbs artifacts. From the comparisons, the proposed model preserves the important information of the image very well and shows very good performance. Originality/value - The proposed image denoising model, which introduces grey relational analysis into the selection and modification of wavelet coefficients, is original. The proposed model provides a viable tool for engineers processing images. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
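For entry 38 above: the grey-theory coefficient selection is the paper's contribution; for orientation only, here is the conventional wavelet-thresholding baseline such methods improve on, using PyWavelets (only detail coefficients are thresholded, the low-frequency approximation is left untouched). The wavelet, decomposition level and threshold rule are common defaults assumed here, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2, mode="soft"):
    """Baseline wavelet denoising: threshold the detail coefficients only."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(image.size))        # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode=mode) for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# Illustrative use on a noisy synthetic image:
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
denoised = wavelet_denoise(img + 0.1 * np.random.randn(64, 64))
```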
39. Ontological content-based filtering for personalised newspapers: A method and its evaluation.
- Author
-
Veronica Maidel, Peretz Shoval, Bracha Shapira, and Meirav Taieb-Maimon
- Subjects
- *
ELECTRONIC newspapers , *ONTOLOGY , *DESIGN , *METHODOLOGY , *INFORMATION filtering , *CONTENT filters (Computer science) , *ALGORITHMS , *MASS media - Abstract
Purpose - The purpose of this paper is to describe, and evaluate, a new ontological content-based filtering method for ranking the relevance of news items for readers. The method has been implemented in ePaper, a personalised electronic newspaper prototype system. The method utilises a hierarchical ontology of news; it considers common and related concepts appearing in a user's profile on the one hand and in a news item's profile on the other, and measures the "hierarchical distances" between these concepts. On that basis it computes the similarity between item and user profiles and rank-orders the news items according to their relevance to each user. Design/methodology/approach - The paper evaluates the performance of the filtering method in an experimental setting. Each participant read news items obtained from an electronic newspaper and rated their relevance. Independently, the filtering method was applied to the same items and generated, for each participant, a list of news items ranked according to relevance. Findings - The evaluations revealed that the filtering algorithm, which takes hierarchically related concepts into consideration, yielded significantly better results than a filtering method that considers only common concepts. The paper determined a best set of values (weights) for the hierarchical similarity parameters. It was also found that the quality of filtering improves as the number of items used for implicit updates of the profile increases, and that even with implicitly updated profiles, it is better to start with user-defined profiles. Originality/value - The proposed content-based filtering method can be used for filtering not only news items but items from any domain, and not only with a three-level hierarchical ontology but with an ontology of any depth, in any language. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
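For entry 39 above: the exact parameter weights are what the paper's experiments tune; the sketch below only illustrates the core idea of scoring hierarchically related concepts lower than exact matches. The hierarchy representation, weight values and helper names are assumptions made for illustration.

```python
def hierarchical_similarity(user_concepts, item_concepts, parent,
                            w_same=1.0, w_child=0.6, w_parent=0.4):
    """Profile-to-profile similarity over a concept hierarchy.

    An item concept scores w_same if it is in the user profile, w_child if its
    parent concept is (the user likes a broader topic), and w_parent if the
    user profile contains one of its children (the user likes a narrower one).
    """
    parents_of_user = {parent[c] for c in user_concepts if parent.get(c)}
    score = 0.0
    for c in item_concepts:
        if c in user_concepts:
            score += w_same
        elif parent.get(c) in user_concepts:
            score += w_child
        elif c in parents_of_user:
            score += w_parent
    return score / max(len(item_concepts), 1)

# Toy ontology: sports > football > champions_league
parent = {"sports": None, "football": "sports", "champions_league": "football"}
print(hierarchical_similarity({"football"}, {"champions_league", "politics"}, parent))
```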
40. Design of iterative ROI transmission tomography reconstruction procedures and image quality analysis.
- Author
-
Hamelin, Benoit, Goussard, Yves, Dussault, Jean-Pierre, Cloutier, Guy, Beaudoin, Gilles, and Soulez, Gilles
- Subjects
- *
TOMOGRAPHY , *ALGORITHMS , *MEDICAL radiography , *SIMULATION methods & models , *METHODOLOGY - Abstract
Purpose: An iterative edge-preserving CT reconstruction algorithm for high-resolution imaging of small regions of the field of view is investigated. It belongs to a family of region-of-interest reconstruction techniques in which a low-cost pilot reconstruction of the whole field of view is first performed and then used to deduce the contribution of the region of interest to the projection data. These projections are used for a high-resolution reconstruction of the region of interest (ROI) using a regularized iterative algorithm, resulting in significant computational savings. This paper examines how the technique by which the pilot reconstruction of the full field of view is obtained affects the total runtime and the image quality in the region of interest. Methods: Previous contributions to the literature have each focused on a single approach for the pilot reconstruction. In this paper, two such approaches are compared: the filtered backprojection and a low-resolution regularized iterative reconstruction method. ROI reconstructions are compared in terms of image quality and computational cost over simulated and physical phantom (Catphan600©) studies, in order to assess the compromises that most impact the quality of the ROI reconstruction. Results: With the simulated phantom, new artifacts that appear in the ROI images are caused by significant errors in the pilot reconstruction. These errors include excessive coarseness of the pilot image grid and beam-hardening artifacts. With the Catphan600 phantom, differences in the imaging model of the scanner and that of the iterative reconstruction algorithm cause dark border artifacts in the ROI images. Conclusions: Inexpensive pilot reconstruction techniques (analytical algorithms, very-coarse-grid penalized likelihood) are practical choices in many common cases. However, they may yield background images altered by edge degradation or beam hardening, inducing projection inconsistency in the data used for ROI reconstruction. The ROI images thus have significant streak and speckle artifacts, which adversely affect the resolution-to-noise compromise. In these cases, edge-preserving penalized-likelihood methods on not-too-coarse image grids prove to be more robust and provide the best ROI image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
41. Accurate size evaluation of cylindrical components.
- Author
-
Ramaswami, Hemant and Anand, Sam
- Subjects
- *
ENGINE cylinders , *METHODOLOGY , *ALGORITHMS , *MATHEMATICAL statistics - Abstract
The objective of this paper is to develop a methodology to calculate the size of a cylindrical profile accurately per American National Standards Institute (ANSI) standards. The ANSI Y14.5.1M–1994 standard defines the size of a cylinder as the size of the largest ball rolling on a spine such that all points on the surface of the cylinder are external to it, or the size of the smallest ball rolling on a spine such that all points on the surface of the cylinder are internal to it. Current methods of size evaluation reduce the complexity of the spine and model it as a straight line. In this paper, a methodology is presented to evaluate the control points of the spine, modeled as an open uniform B-spline curve of a prespecified degree, based on points collected on the surface of the cylinder. This provides a quantitative measure of the size of the cylinder in accordance with ANSI standards. The formulations to evaluate the maximum inscribing spine and the minimum circumscribing spine are presented as multilevel optimization problems. The outer-level optimization is used to identify the optimal set of control points for the spline representing the path of the rolling ball. The inner-level optimization is used to find the nearest point on the spline corresponding to every point in the dataset. The optimization formulation presented in this paper has been used to calculate the true size of cylinders for several published, simulated, and real datasets. These results are then compared to traditional estimates of the size of a cylinder. The results indicate that the method presented for calculating the size of a cylinder conforms better to the ANSI standards than other methods, such as the maximum inscribed, minimum circumscribed, and least squares cylinders, which have traditionally been used as indicators of the size of a cylinder. Further analysis is presented to observe the effect of sample size on the results of the algorithm. It is observed that as the sample size increases, the difference between the results of the presented algorithm and those of the traditional methods increases, with the presented method providing more accurate estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
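For entry 41 above: the spine-based ANSI evaluation is the paper's method; for contrast, below is the traditional straight-axis least-squares cylinder it is compared against, fit with SciPy. The parameterization (axis point, axis direction, radius) and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder_ls(points):
    """Least-squares cylinder: residual = (distance of point to axis) - radius."""
    pts = np.asarray(points, float)

    def residuals(x):
        p, d, r = x[:3], x[3:6], x[6]
        d = d / np.linalg.norm(d)
        v = pts - p
        along = v @ d
        radial = v - np.outer(along, d)
        return np.linalg.norm(radial, axis=1) - r

    x0 = np.concatenate([pts.mean(axis=0), [0.0, 0.0, 1.0], [1.0]])
    sol = least_squares(residuals, x0)
    d = sol.x[3:6] / np.linalg.norm(sol.x[3:6])
    return sol.x[:3], d, sol.x[6]          # axis point, axis direction, radius

# Illustrative data: noisy points sampled on a unit-radius cylinder along z.
t = np.linspace(0, 2 * np.pi, 200)
pts = np.c_[np.cos(t), np.sin(t), np.linspace(-1, 1, 200)] + 0.01 * np.random.randn(200, 3)
print(fit_cylinder_ls(pts))
```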
42. Enhancing a core journal collection for digital libraries.
- Author
-
Kovacevic, Ana, Devedzic, Vladan, and Pocajt, Viktor
- Subjects
- *
DIGITAL libraries , *METHODOLOGY , *INFORMATION storage & retrieval systems , *TEXT mining , *SEMANTIC computing , *ALGORITHMS , *SEMANTIC networks (Information theory) , *INFORMATION resources management - Abstract
Purpose - This paper aims to address the problem of enhancing the selection of titles offered by a digital library, by analysing the differences between the titles cited by local authors in their publications and the titles listed in the digital library's offer. Design/methodology/approach - Text mining techniques were used to identify duplicate references. Moreover, the process of identifying syntactically different data was improved with the automated discovery of thesauri from correctly matched data, and the generated thesaurus was further used in semantic clustering. The results were represented visually in an effective way. Findings - The paper finds that the matching function based on the Jaro-Winkler algorithm can be used efficiently in the de-duplication process. A generated thesaurus that utilises domain-specific knowledge can also be used in the semantic clustering of references. It was shown that semantic clustering may be most useful for partitioning data, which is particularly significant when dealing with large amounts of data, as is usually the case. Moreover, references that have the same or similar scores may be considered candidate matches in the further de-duplication process. Finally, the approach proved to be a more efficient way of visually representing the results. Originality/value - This function can be implemented to enhance the selection of titles to be offered by a digital library, making that offer more compliant with what the library's users frequently cite. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
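For entry 42 above: the de-duplication function builds on the Jaro-Winkler measure; a self-contained sketch of that measure follows (the prefix scaling factor 0.1 and the example strings are standard/illustrative choices, not taken from the paper).

```python
def jaro(s1, s2):
    """Jaro similarity (0 = no resemblance, 1 = identical)."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(len(s1), len(s2)) // 2 - 1
    used = [False] * len(s2)
    matched1 = []
    for i, ch in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not used[j] and s2[j] == ch:
                used[j] = True
                matched1.append(ch)
                break
    m = len(matched1)
    if m == 0:
        return 0.0
    matched2 = [s2[j] for j in range(len(s2)) if used[j]]
    transpositions = sum(a != b for a, b in zip(matched1, matched2)) / 2
    return (m / len(s1) + m / len(s2) + (m - transpositions) / m) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost the Jaro score for a common prefix of up to four characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# Two syntactically different renderings of the same journal title:
print(round(jaro_winkler("J Am Stat Assoc", "Journal of the American Statistical Association"), 3))
```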
43. Identification of segments and optimal isolation valve system design in water distribution networks.
- Author
-
Giustolisi, Orazio and Savic, Dragan
- Subjects
- *
METHODOLOGY , *WATER-pipe valves , *VALVES , *WATER distribution , *WATER-supply engineering , *ALGORITHMS , *GENETIC algorithms - Abstract
This paper presents a novel methodology for assessing an isolation valve system and the portions of a water distribution network (segments) directly isolated by valve closure. Planned interruptions (e.g. regular maintenance) and unplanned interruptions (e.g. pipe bursts) occur regularly in water distribution networks, making it necessary to isolate pipes. To isolate a pipe in the network, it is necessary to close a subset of valves which directly separates a small portion of the network, i.e., causing the minimum possible disruption. This is not always straightforward to achieve, as the valve system is not normally designed to isolate each pipe separately (i.e. with two valves at the end of each pipe). Therefore, for management purposes, it is important to identify the association between each subset of valves and the segments directly isolated by closing them. Furthermore, it is also important to improve the design of the isolation valve system in order to increase network reliability. Thus, this paper describes an algorithm for identifying the association between valves and isolated segments. The approach is based on the use of topological matrices of a network whose topology is modified to account for the existence of the valve system. The algorithm is demonstrated on a simple network and tested on an Apulian network in which the isolation valve system is designed using a classical multi-objective optimisation based on genetic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
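For entry 43 above: the paper works with modified topological matrices; the sketch below answers the same question with a plain traversal, growing the isolated segment from a broken pipe until every exit is blocked by a valve. The data model (valves located at pipe ends, encoded as (pipe, node) pairs) and the toy network are assumptions for illustration.

```python
from collections import deque

def isolation_segment(pipes, valves, broken_pipe):
    """Pipes and nodes cut off, plus valves to close, to isolate `broken_pipe`.

    `pipes[p]` gives the two end nodes of pipe p; `valves` is a set of
    (pipe, node) pairs.  A valve blocks spreading between a pipe and a node;
    where no valve exists, the neighbour is dragged into the isolated segment.
    """
    incident = {}
    for p, (a, b) in pipes.items():
        incident.setdefault(a, []).append(p)
        incident.setdefault(b, []).append(p)
    seg_pipes, seg_nodes, to_close = {broken_pipe}, set(), set()
    queue = deque([("pipe", broken_pipe)])
    while queue:
        kind, item = queue.popleft()
        if kind == "pipe":
            for node in pipes[item]:
                if (item, node) in valves:
                    to_close.add((item, node))
                elif node not in seg_nodes:
                    seg_nodes.add(node)
                    queue.append(("node", node))
        else:
            for p in incident[item]:
                if (p, item) in valves:
                    to_close.add((p, item))
                elif p not in seg_pipes:
                    seg_pipes.add(p)
                    queue.append(("pipe", p))
    return seg_pipes, seg_nodes, to_close

# Toy network: three pipes in a line, valves only at both ends of "p2".
pipes = {"p1": ("n1", "n2"), "p2": ("n2", "n3"), "p3": ("n3", "n4")}
valves = {("p2", "n2"), ("p2", "n3")}
print(isolation_segment(pipes, valves, "p2"))   # p2 alone, closing both valves
```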
44. Towards high performance computing for molecular structure prediction using IBM Cell Broadband Engine - an implementation perspective.
- Author
-
Krishnan, S. P. T., Sim Sze Liang, and Veeravalli, Bharadwaj
- Subjects
- *
RNA , *ALGORITHMS , *MOLECULAR biology , *METHODOLOGY , *CELLS , *DYNAMIC programming - Abstract
Background: RNA structure prediction is a computationally complex task, especially with pseudoknots. The problem is well studied in the existing literature and predominantly uses highly coupled Dynamic Programming (DP) solutions. The problem scale and complexity become enormous as the sequence size increases, which makes the case for parallelization. Parallelization can be achieved by way of networked platforms (clusters, grids, etc.) as well as by using modern multi-core chips. Methods: In this paper, we exploit the parallelism capabilities of the IBM Cell Broadband Engine to parallelize an existing Dynamic Programming (DP) algorithm for RNA secondary structure prediction. We design three different implementation strategies that exploit the inherent data, code and/or hybrid parallelism, referred to as C-Par, D-Par and H-Par, and analyze their performance. Our approach attempts to introduce parallelism in critical sections of the algorithm. We ran our experiments on the Sony PlayStation 3 (PS3), which is based on the IBM Cell chip. Results: Our results suggest that introducing parallelism into the DP algorithm allows it to easily handle longer sequences which would otherwise consume a large amount of time on single-core computers. The results further demonstrate the speed-up gained from exploiting the inherent parallelism in the problem and also elicit the advantages of using multi-core platforms for designing more sophisticated methodologies to handle fairly long RNA sequences. Conclusion: The speed-up performance reported here is promising, especially when the sequence length is long. To the best of our literature survey, the work reported in this paper is probably the first of its kind to utilize the IBM Cell Broadband Engine (a heterogeneous multi-core chip) to implement such a DP algorithm. The results also encourage the use of multi-core platforms for designing more sophisticated methodologies for handling fairly long RNA sequences to predict their secondary structure. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
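For entry 44 above: the parallelized algorithm in the paper handles pseudoknots and is considerably more involved; as a compact reference for the kind of coupled DP recurrences being parallelized, here is the classic serial Nussinov base-pair-maximization DP (a simpler relative, used here purely for illustration).

```python
def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs (Nussinov DP, no pseudoknots).

    dp[i][j] holds the best count for the subsequence i..j; every cell depends
    on cells for strictly shorter subsequences, the coupling that makes naive
    parallelization of such DPs non-trivial.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):            # j - i = span
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                    # i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)      # i pairs with j
            for k in range(i + 1, j):              # bifurcation into two halves
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # 3: a small hairpin with three stacked pairs
```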
45. Minimum entropy based run-to-run control for semiconductor processes with uncertain metrology delay
- Author
-
Zhang, Jianhua, Chu, Chih-Chiang, Munoz, Jose, and Chen, Junghui
- Subjects
- *
ENTROPY (Information theory) , *STATISTICAL process control , *SEMICONDUCTORS , *METHODOLOGY , *PRODUCT quality , *ALGORITHMS - Abstract
Abstract: A novel run-to-run control methodology for semiconductor processes with uncertain metrology delay, developed by combining the minimum error entropy criterion with an optimal control strategy, is presented. In most semiconductor processes, the product quality data from the previous run are often not available before the start of the next run. Thus, the corrective step is often delayed by one batch or more, and the duration of the delay is uncertain, with stochastic characteristics. Coupled with inaccurate process models, the delay may lead to significant variations in the process outputs even when exponentially weighted moving average (EWMA) controllers are used. This paper proposes a new method of handling the uncertain metrology delay from a probabilistic viewpoint. The fundamentals of run-to-run control systems are first reexamined, and then an innovative performance index is given by incorporating the entropy (or information potential) and the mean value of the tracking error with constraints on control input energy. A probability density function (PDF) based optimal control algorithm is proposed for processes where the disturbance and delay are non-Gaussian, and the stability of the algorithm is analyzed. In addition, the methodology of the proposed control strategy is extended to include recursive PDF estimation and on-line real-time implementation. The paper also includes a simulation example of minimum entropy control of a tungsten chemical-vapor deposition process to illustrate the methodology. Furthermore, comparisons between the conventional EWMA method and the proposed method are made to show the advantages of the newly proposed method. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
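For entry 45 above: the paper's contribution is the entropy-based controller; the sketch below is only the conventional EWMA run-to-run baseline it is compared with, on an assumed linear process y = alpha + beta*u + noise. The gain estimate, lambda and noise level are illustrative values, not the paper's.

```python
import random

def ewma_r2r(target, b_est, lam, true_alpha, true_beta, n_runs=25, noise=0.05, seed=0):
    """Conventional EWMA run-to-run control of y = alpha + beta * u + noise.

    After each run the offset estimate `a` is refreshed with an EWMA of the
    model error; the next recipe `u` is chosen to hit `target` under the
    (possibly wrong) gain estimate `b_est`.
    """
    rng = random.Random(seed)
    a, outputs = 0.0, []
    for _ in range(n_runs):
        u = (target - a) / b_est                    # recipe for the next run
        y = true_alpha + true_beta * u + rng.gauss(0, noise)
        a = lam * (y - b_est * u) + (1 - lam) * a   # EWMA update of the offset
        outputs.append(y)
    return outputs

# Mismatched gain (estimate 1.0 vs true 1.2); the loop still settles near target.
ys = ewma_r2r(target=10.0, b_est=1.0, lam=0.3, true_alpha=2.0, true_beta=1.2)
print([round(v, 2) for v in ys[-5:]])
```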
46. A General Algorithm for Univariate Stratification.
- Author
-
Baillargeon, Sophie and Rivest, Louis-Paul
- Subjects
- *
ALGORITHMS , *POPULATION , *MATHEMATICAL variables , *METHODOLOGY , *GEOMETRY - Abstract
This paper presents a general algorithm for constructing strata in a population using X, a univariate stratification variable known for all the units in the population. Stratum h consists of all the units with an X value in the interval . The stratum boundaries are obtained by minimizing the anticipated sample size for estimating the population total of a survey variable Y with a given level of precision. The stratification criterion allows the presence of a take-none and of a take-all stratum. The sample is allocated to the strata using a general rule that features proportional allocation, Neyman allocation, and power allocation as special cases. The optimization can take into account a stratum-specific anticipated non-response and a model for the relationship between the stratification variable X and the survey variable Y. A loglinear model with stratum-specific mortality for Y given X is presented in detail. Two numerical algorithms for determining the optimal stratum boundaries, attributable to Sethi and Kozak, are compared in a numerical study. Several examples illustrate the stratified designs that can be constructed with the proposed methodology. All the calculations presented in this paper were carried out with , an R package that will be available on CRAN (Comprehensive R Archive Network). [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
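For entry 46 above: the boundary-search algorithms (Sethi, Kozak) and the R package are the paper's subject; the sketch below only shows two simpler ingredients of the same problem, the cumulative sqrt(f) rule for initial boundaries and Neyman allocation, with X standing in for the survey variable Y. The bin count and sample size are illustrative assumptions.

```python
import numpy as np

def cum_sqrt_f_boundaries(x, n_strata, n_bins=100):
    """Dalenius-Hodges cumulative sqrt(f) rule for initial stratum boundaries."""
    freq, edges = np.histogram(x, bins=n_bins)
    csf = np.cumsum(np.sqrt(freq))
    targets = csf[-1] * np.arange(1, n_strata) / n_strata
    return edges[np.searchsorted(csf, targets) + 1]     # interior boundaries

def neyman_allocation(x, boundaries, n_total):
    """Allocate n_total sample units to strata in proportion to N_h * S_h."""
    labels = np.digitize(x, boundaries)
    weights = []
    for h in range(len(boundaries) + 1):
        xh = x[labels == h]
        weights.append(len(xh) * (xh.std(ddof=1) if len(xh) > 1 else 0.0))
    weights = np.array(weights)
    return np.round(n_total * weights / weights.sum()).astype(int)

# Illustrative skewed stratification variable:
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
b = cum_sqrt_f_boundaries(x, n_strata=4)
print(np.round(b, 2), neyman_allocation(x, b, n_total=500))
```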
47. Customer choice of reliability in spinning reserve procurement and cost allocation using well-being analysis
- Author
-
Ahmadi-Khatir, A., Fotuhi-Firuzabad, M., and Goel, L.
- Subjects
- *
CONSUMER attitudes , *ELECTRIC utilities , *MARKETING , *RELIABILITY in engineering , *COST allocation , *ALGORITHMS , *METHODOLOGY , *DEREGULATION - Abstract
Abstract: A novel pool-based market-clearing algorithm for spinning reserve (SR) procurement, and for allocating the cost of providing spinning reserve among customers (DisCos), is developed in this paper. A rational buyer market model is used to clear the energy and spinning reserve markets in the proposed algorithm. This market model gives DisCos the opportunity to declare their own energy requirements together with their desired reliability levels to the ISO, and also to participate in the SR market as interruptible load. The DisCos' desired reliability levels are selected from a hybrid deterministic/probabilistic framework designated as the system well-being model. Using the demand of each DisCo and its associated desired reliability level, the overall desired system reliability level is determined. The market operator then purchases the spinning reserve commodity from the associated market such that the overall desired system reliability level is satisfied. A methodology is developed in this paper to fairly allocate the cost of providing spinning reserve among DisCos based on their demands and desired reliability levels. An algorithm is also presented for implementing the proposed approach. The effectiveness of the proposed technique is examined using the IEEE-RTS. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
48. A Neural Network for Real-Time Retrievals of PWV and LWP From Arctic Millimeter-Wave Ground-Based Observations.
- Author
-
Cadeddu, Maria R., Turner, David D., and Liljegren, James C.
- Subjects
- *
ARTIFICIAL neural networks , *ALGORITHMS , *PRECIPITABLE water , *METHODOLOGY , *ERROR analysis in mathematics , *HUMIDITY - Abstract
This paper presents a new neural network (NN) algorithm for real-time retrievals of low amounts of precipitable water vapor (PWV) and integrated liquid water from millimeter-wave ground-based observations. Measurements are collected by the 183.3-GHz G-band vapor radiometer (GVR) operating at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility, Barrow, AK. The NN provides the means to explore the nonlinear regime of the measurements and investigate the physical boundaries of the operability of the instrument. A methodology to compute individual error bars associated with the NN output is developed, and a detailed error analysis of the network output is provided. Through the error analysis, it is possible to isolate several components contributing to the overall retrieval errors and to analyze the dependence of the errors on the inputs. The network outputs and associated errors are then compared with results from a physical retrieval and with the ARM two-channel microwave radiometer (MWR) statistical retrieval. When the NN is trained with a seasonal training data set, the retrievals of water vapor yield results that are comparable to those obtained from a traditional physical retrieval, with a retrieval error of ~5% when the PWV is between 2 and 10 mm, but with the advantages that the NN algorithm does not require vertical profiles of temperature and humidity as input and is significantly faster computationally. Liquid water path (LWP) retrievals from the NN have a significantly improved clear-sky bias (mean of ~2.4 g/m²) and a retrieval error varying from 1 to about 10 g/m² when the PWV amount is between 1 and 10 mm. As an independent validation of the LWP retrieval, the longwave downwelling surface flux was computed and compared with observations. The comparison shows a significant improvement with respect to the MWR statistical retrievals, particularly for LWP amounts of less than 60 g/m². This paper shows that the GVR alone can provide overall improved PWV and LWP retrievals when the PWV amount is less than 10 mm and, when combined with the MWR, can provide improved retrievals over the whole water-vapor range. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
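For entry 48 above: the GVR network, channels and training set are the paper's; the sketch below only illustrates the generic regression setup (brightness temperatures in, PWV out) with scikit-learn on a synthetic stand-in for a radiative-transfer training set. The channel responses, architecture and error metric are all assumptions made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 4 "channels" whose brightness temperatures saturate
# nonlinearly with PWV (mm).  Purely illustrative, not a radiative-transfer model.
rng = np.random.default_rng(0)
pwv = rng.uniform(0.5, 10.0, size=5_000)
tb = np.column_stack([
    260.0 - 40.0 * np.exp(-k * pwv) + rng.normal(0.0, 0.5, pwv.size)
    for k in (0.1, 0.3, 0.6, 1.0)
])

X_tr, X_te, y_tr, y_te = train_test_split(tb, pwv, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print(f"median absolute PWV error: {np.median(err):.2f} mm")
```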
49. Cooperative Localization for Autonomous Underwater Vehicles.
- Author
-
Bahr, Alexander, Leonard, John J., and Fallon, Maurice F.
- Subjects
- *
SOFTWARE localization , *SUBMERSIBLES , *ROBOT control systems , *AUTOMOTIVE navigation systems , *BANDWIDTHS , *METHODOLOGY , *MACHINE theory , *ROBOTICS , *CONFIGURATION management , *ALGORITHMS - Abstract
This paper describes an algorithm for distributed acoustic navigation for Autonomous Underwater Vehicles (AUVs). Whereas typical AUV navigation systems utilize pre-calibrated arrays of static transponders, our work seeks to create a fully mobile network of AUVs that perform acoustic ranging and data exchange with one another to achieve cooperative positioning for extended duration missions over large areas. The algorithm enumerates possible solutions for the AUV trajectory based on dead-reckoning and range-only measurements provided by acoustic modems that are mounted on each vehicle, and chooses the trajectory via minimization of a cost function based on these constraints. The resulting algorithm is computationally efficient, meets the strict bandwidth requirements of available AUV modems, and has potential to scale well to networks of large numbers of vehicles. The method has undergone extensive experimentation, and results from three different scenarios are reported in this paper, each of which utilizes MIT SCOUT Autonomous Surface Craft (ASC) as convenient platforms for testing. In the first experiment, we utilize three ASCs, each equipped with a Woods Hole acoustic modem, as surrogates for AUVs. In this scenario, two ASCs serve as Communication/Navigation Aids (CNAs) for a third ASC that computes its position based exclusively on GPS positions of the CNAs and acoustic range measurements between platforms. In the second scenario, an undersea glider is used in conjunction with two ASCs serving as CNAs. Finally, in the third experiment, a Bluefin12 AUV serves as the target vehicle. All three experiments demonstrate the successful operation of the technique with real ocean data. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
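For entry 49 above: the paper's algorithm enumerates whole candidate trajectories under bandwidth constraints; the sketch below strips this down to a single epoch, reconciling a dead-reckoned 2-D position with acoustic ranges to two CNAs of known position by nonlinear least squares. The weights, geometry and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fuse_position(dr_xy, cna_positions, ranges, dr_weight=0.5, range_weight=1.0):
    """Fuse a dead-reckoned 2-D position with ranges to CNAs (one epoch).

    Residuals combine the mismatch between predicted and measured ranges with a
    (weighted) pull toward the dead-reckoned estimate.
    """
    cna = np.asarray(cna_positions, float)
    meas = np.asarray(ranges, float)

    def residuals(xy):
        pred = np.linalg.norm(cna - xy, axis=1)
        return np.concatenate([range_weight * (pred - meas),
                               dr_weight * (xy - np.asarray(dr_xy, float))])

    return least_squares(residuals, x0=np.asarray(dr_xy, float)).x

# Illustrative epoch: true position (100, 50); dead reckoning has drifted.
cnas = [(0.0, 0.0), (200.0, 0.0)]
true = np.array([100.0, 50.0])
rng = np.random.default_rng(3)
meas = [np.linalg.norm(true - np.array(c)) + rng.normal(0, 0.5) for c in cnas]
print(np.round(fuse_position((92.0, 57.0), cnas, meas), 1))
```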
50. Method for Prediction and Optimization of a Stratospheric Balloon Ascent Trajectory.
- Author
-
Morani, Gianfranco, Palumbo, Roberto, Cuciniello, Giovanni, Corraro, Federico, and Russo, Michelangelo
- Subjects
- *
METHODOLOGY , *MATHEMATICAL optimization , *STATISTICS , *ALGORITHMS , *FLIGHT , *TRAJECTORY optimization - Abstract
In this paper, we propose a methodology for the prediction and optimization of the ascent trajectory of a stratospheric balloon to target a specified three-dimensional area. The methodology relies mainly on the Analysis Code for High-Altitude Balloons, a simulation tool for predicting the flight trajectory and thermal behavior of high-altitude, zero-pressure balloons, and on a statistical analysis for estimating the trajectory prediction errors. The paper also describes the algorithms used for balloon parameter optimization to obtain a flight trajectory that reaches a predefined target area without any ballast drop or gas venting control. The proposed methodology was successfully used during the first Dropped Transonic Flight Test of the Flying Test Bed 1 demonstrator, accomplished on 24 February 2007 by the Italian Aerospace Research Center. The reported postflight analysis of the entire test campaign demonstrates that the proposed methodology for trajectory prediction and optimization yields very satisfactory and reliable results, both for selecting the best day to perform such a mission and for defining the correct balloon parameters to target a predefined three-dimensional area. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF