Search Results (418 results)
2. TP Model Transformation as a Way to LMI-Based Controller Design.
- Author
-
Baranyi, Peter
- Subjects
COMPUTATIONAL complexity ,ELECTRONIC data processing ,MATRICES (Mathematics) ,TENSOR products ,LINEAR algebra ,COMPUTER software - Abstract
The main objective of this paper is to propose a numerical controller design methodology. This methodology has two steps. In the first step, tensor product (TP) model transformation is applied, which is capable of transforming a dynamic system model, given over a bounded domain, into TP model form, including polytopic or Takagi-Sugeno model forms. Then, in the second step, Lyapunov's controller design theorems are utilized in the form of linear matrix inequalities (LMIs). The main novelty of this paper is the development of the TP model transformation of the first step. It does not merely transform to TP model form, but it automatically prepares the transformed model for all the specific conditions required by the LMI design. The LMI design can, hence, be immediately executed on the result of the TP model transformation. The secondary objective of this paper is to show that representing a dynamic model in TP model form involves a tradeoff between modeling accuracy and computational complexity. Having a controller with low computational cost is highly desired in many real implementations. The proposed TP model transformation is developed and specialized to find a complexity-minimized model for a given modeling accuracy. Detailed control design examples are given. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
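The transformation step this abstract describes is conventionally built on sampling the parameter-dependent system matrix over a grid and applying a higher order SVD (HOSVD) to the resulting tensor; the accuracy/complexity tradeoff appears as the choice of how many singular vectors to retain. Below is a minimal numerical sketch of that sampling-plus-HOSVD step; the toy system matrix, grid, and tolerance are illustrative assumptions, and the paper's LMI-specific weight preparations are not reproduced.

```python
import numpy as np

# Hypothetical 2-parameter LPV system: S(p1, p2) is a 2x3 system matrix.
def sample_system(p1, p2):
    return np.array([[1.0 + p1,      p2,  0.5 * p1 * p2],
                     [0.3 * p2, 1.0 - p1, p1 + p2      ]])

# Step 1: sample S over a rectangular grid of the bounded parameter domain.
g1, g2 = np.linspace(-1, 1, 20), np.linspace(-1, 1, 20)
T = np.array([[sample_system(a, b) for b in g2] for a in g1])  # (20, 20, 2, 3)

# Step 2: HOSVD along each parameter dimension: unfold, SVD, truncate.
def mode_svd(tensor, mode, tol=1e-9):
    unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
    U, s, _ = np.linalg.svd(unfolded, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))  # dropping small singular values trades accuracy for complexity
    return U[:, :rank]                  # columns discretize the weighting functions

U1 = mode_svd(T, 0)  # weights over p1, one column per retained vertex
U2 = mode_svd(T, 1)  # weights over p2

# Core tensor: the vertex (LTI) systems of the polytopic TP model.
core = np.einsum('ab,cd,bdij->acij', U1.T, U2.T, T)
print('retained vertices:', U1.shape[1], 'x', U2.shape[1])
```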
3. Communication and Synchronization in Distributed Systems.
- Author
-
Silberschatz, Abraham
- Subjects
DISTRIBUTED computing ,ELECTRONIC data processing ,PROGRAMMING languages ,MICROPROCESSORS ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
Recent advances in technology have made the construction of general-purpose systems out of many small independent microprocessors feasible. One of the issues concerning distributed systems is the question of appropriate language constructs for the handling of communication and synchronization. In his paper, "Communicating sequential processes," Hoare has suggested the use of the input and output constructs and Dijkstra's guarded commands to handle these two issues. This paper examines Hoare's concepts in greater detail by concentrating on the following two issues: 1) allowing both input and output commands to appear in guards, 2) simple abstract implementation of the input and output constructs. In the process of examining these two issues we develop a framework for the design of appropriate communication and synchronization facilities for distributed systems. [ABSTRACT FROM AUTHOR]
- Published
- 1979
4. Contextual Resource Negotiation-Based Task Allocation and Load Balancing in Complex Software Systems.
- Author
-
Yichuan Jiang and Jiuchuan Jiang
- Subjects
COMPUTER software ,ELECTRONIC systems ,TECHNOLOGICAL innovations ,SYSTEMS design ,COMPUTER systems ,ELECTRONIC data processing ,INFORMATION resources management - Abstract
In complex software systems, software agents often need to negotiate with other agents within their physical and social contexts when they execute tasks. Obviously, the capacity of a software agent to execute tasks is determined not only by itself but also by its contextual agents; thus, the number of tasks allocated to an agent should be directly proportional to its self-owned resources as well as its contextual agents' resources. This paper presents a novel task allocation model based on contextual resource negotiation. In the presented model, when a task arrives at the software system, it is first assigned to a principal agent that has a high contextual enrichment factor for the required resources; the principal agent then negotiates with its contextual agents to execute the assigned task. However, when multiple tasks arrive at the software system, load balancing is necessary to avoid overconvergence of tasks at certain agents that are rich in contextual resources. Thus, this paper also presents a novel load balancing method: if an excessive number of tasks is queued for a certain agent, the capacities of both the agent itself and its contextual agents to accept new tasks are reduced. Therefore, in this paper, task allocation and load balancing are implemented according to the contextual resource distribution of agents, which is well suited to the characteristics of complex software systems; the presented model also incurs lower communication costs between allocated agents than previous methods based on agents' self-owned resource distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
5. Miss Rate Prediction Across Program Inputs and Cache Configurations.
- Author
-
Yutao Zhong, Dropsho, Steven G., Xipeng Shen, Studer, Ahren, and Chen Ding
- Subjects
CACHE memory ,ELECTRONIC data processing ,COMPUTER storage devices ,COMPUTER software ,COMPUTER algorithms ,DATA structures ,ELECTRONIC file management ,VISUAL programming languages (Computer science) ,HIGH technology industries - Abstract
Improving cache performance requires understanding cache behavior. However, measuring cache performance for one or two data input sets provides little insight into how cache behavior varies across all data input sets and all cache configurations. This paper uses locality analysis to generate a parameterized model of program cache behavior. Given a cache size and associativity, this model predicts the miss rate for arbitrary data input set sizes. This model also identifies critical data input sizes where cache behavior exhibits marked changes. Experiments show this technique is within 2 percent of the hit rate for set associative caches on a set of floating-point and integer programs using array and pointer-based data structures. Building on the new model, this paper presents an interactive visualization tool that uses a three-dimensional plot to show miss rate changes across program data sizes and cache sizes and its use in evaluating compiler transformations. Other uses of this visualization tool include assisting machine and benchmark-set design. The tool can be accessed on the Web at http://www.cs.rochester.edu/research/locality. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
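The locality analysis underlying this kind of model is commonly based on reuse distance; the sketch below shows the basic link such models exploit: for a fully associative LRU cache, a reference misses exactly when its reuse distance reaches the cache capacity. The toy trace and the naive O(n^2) distance computation are illustrative assumptions, not the paper's cross-input parameterization.

```python
def reuse_distances(trace):
    """Reuse distance of an access = number of distinct addresses touched
    since the previous access to the same address (inf for cold misses)."""
    last_pos, dists = {}, []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            dists.append(len(set(trace[last_pos[addr] + 1:i])))  # naive O(n^2)
        else:
            dists.append(float('inf'))
        last_pos[addr] = i
    return dists

def predicted_miss_rate(dists, cache_blocks):
    # Fully associative LRU: a reference misses iff its reuse distance
    # is at least the number of cache blocks.
    return sum(d >= cache_blocks for d in dists) / len(dists)

trace = [0, 1, 2, 0, 3, 0, 1, 2, 4, 1]  # toy block-address trace
for size in (2, 4, 8):
    print(size, predicted_miss_rate(reuse_distances(trace), size))
```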
6. Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization.
- Author
-
Min-Ling Zhang and Zhi-Hua Zhou
- Subjects
ARTIFICIAL neural networks ,COMPUTER algorithms ,MACHINE learning ,COMPUTER programming ,COMPUTER software ,ELECTRONIC data processing ,MACHINE theory ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
In multilabel learning, each instance in the training set is associated with a set of labels, and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed by proposing a neural network algorithm named BP-MLL, i.e., Backpropagation for Multilabel Learning. It is derived from the popular backpropagation algorithm by employing a novel error function capturing the characteristics of multilabel learning: the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
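The error function mentioned in this record rewards ranking an instance's own labels above the others; a pairwise exponential ranking loss is the form commonly stated for BP-MLL. A minimal sketch for a single instance (the toy scores and label set are assumed):

```python
import numpy as np

def bpmll_instance_error(outputs, labels):
    """Pairwise ranking error in the BP-MLL style: penalize any label in
    the instance's label set whose network output does not clearly exceed
    that of a label outside the set.
    outputs: per-label scores; labels: boolean mask of the true label set."""
    pos = np.where(labels)[0]
    neg = np.where(~labels)[0]
    diffs = outputs[pos][:, None] - outputs[neg][None, :]  # c_k - c_l pairs
    return np.exp(-diffs).sum() / (len(pos) * len(neg))

outputs = np.array([0.9, 0.2, 0.7, 0.1])
labels  = np.array([True, False, True, False])
print(bpmll_instance_error(outputs, labels))  # small when ranking is correct
```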
7. Shallow Knowledge as an Aid to Deep Understanding in Early Phase Requirements Engineering.
- Author
-
Sawyer, Pete, Rayson, Paul, and Cosh, Ken
- Subjects
PROGRAMMING languages ,ELECTRONIC data processing ,ENGINEERING design ,COMPUTER software ,COMPUTER software development ,ELECTRONIC systems ,COMPUTER networks ,SEMANTIC network analysis ,ENGINEERS - Abstract
Requirements engineering's continuing dependence on natural language description has made it the focus of several efforts to apply language engineering techniques. The raw textual material that forms an input to early phase requirements engineering and which informs the subsequent formulation of the requirements is inevitably uncontrolled and this makes its processing very hard. Nevertheless, sufficiently robust techniques do exist that can be used to aid the requirements engineer provided that the scope of what can be achieved is understood. In this paper, we show how combinations of lexical and shallow semantic analysis techniques developed from corpus linguistics can help human analysts acquire the deep understanding needed as the first step towards the synthesis of requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
8. Genre Analysis in Technical Communication.
- Author
-
Luzón, María José
- Subjects
COMMUNICATION of technical information ,PROGRAMMING languages ,ELECTRONIC data processing ,ARTIFICIAL languages ,COMPUTER programmers ,COMPUTER software - Abstract
An increasing body of research relies on genre to analyze academic and professional communication and to describe how members of a community use language. The purpose of this paper is to provide a review of genre-based research in technical communication and to describe the different approaches to genre and to genre teaching. While some research focuses on the textual analysis of genres, other studies focus on the analysis of the social context and the ideology and structure of the discourse community that owns the genre, and on the role of genres as social rhetorical actions of the community. These two perspectives are also reflected in the teaching of genre in technical communication. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
9. A Transient-Chaotic Autoassociative Network (TCAN) Based on Lee Oscillators.
- Author
-
Lee, Raymond S.T.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,DIGITAL computer simulation ,ELECTRONIC data processing ,SIMULATION methods & models ,COMPUTER software - Abstract
In the past few decades, neural networks have been extensively adopted in various applications ranging from simple synaptic memory coding to sophisticated pattern recognition problems such as scene analysis. Moreover, current studies on neuroscience and physiology have reported that in a typical scene segmentation problem our major senses of perception (e.g., vision, olfaction, etc.) are highly involved in temporal (or what we call "transient") nonlinear neural dynamics and oscillations. This paper is an extension of the author's previous work on the dynamic neural model (EGDLM) of memory processing and on composite neural oscillators for scene segmentation. Moreover, it is inspired by the work of Aihara et al. and Wang on chaotic neural oscillators in pattern association. In this paper, the author proposes a new transient chaotic neural oscillator, namely the "Lee oscillator," to provide temporal neural coding and an information processing scheme. To illustrate its capability for memory association, a chaotic autoassociative network, namely the Transient-Chaotic Autoassociative Network (TCAN), was constructed based on the Lee oscillator. Different from classical autoassociators such as the celebrated Hopfield network, which provides a "time-independent" pattern association, the TCAN provides a remarkable progressive memory association scheme [what we call "progressive memory recalling" (PMR)] during the transient chaotic memory association. This is exactly consistent with the latest research in psychiatry and perception psychology on dynamic memory recalling schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
10. Common-Case Computation: A High-Level Energy and Performance Optimization Technique.
- Author
-
Lakshminarayana, Ganesh, Raghunathan, Anand, Khouri, Kamal S., Jha, Niraj K., and Dey, Sujit
- Subjects
ELECTRONIC data processing ,COMPUTER software ,ALGORITHMS ,HARDWARE ,ELECTRONIC systems ,INTEGRATED circuits - Abstract
This paper proposes a novel circuit design methodology, called common-case computation (CCC)-based design, and new design automation algorithms for optimizing energy consumption and performance. The proposed techniques are applicable in conjunction with any high-level design methodology, where a structural register-transfer level (RTL) description and its corresponding scheduled behavioral (cycle-accurate functional) description are available. It is a well-known fact that in behavioral descriptions of hardware circuits (and also in software programs), a small set of computations often account for most of the computational complexity. However, in the hardware implementations (structural RTL or lower level), the common cases and the remaining computations are typically treated alike. This paper shows that identifying and exploiting common cases during the design process can lead to implementations that are much more efficient in terms of energy consumption and performance. We propose a new high-level design methodology with the following steps: 1) extraction of common-case detection and execution behaviors from the scheduled description; 2) simplification of the common-case behaviors in a stand-alone manner; 3) synthesis of common-case detection circuits and common-case execution circuits from the common-case behaviors; and 4) composing the original design with the common-case circuits to result in an optimized design. We demonstrate that the optimized designs reduce energy consumption by up to 59.8%, and simultaneously improve performance by up to 76.6%, compared with designs derived without special regard for common cases. The simultaneous improvements in energy and performance result in energy-delay product improvements of up to 90.6%. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
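The CCC methodology targets RTL hardware, but the detect-then-execute decomposition it relies on can be illustrated in software: a cheap detection step routes common inputs to a simplified execution path, and everything else falls back to the original computation. The power-of-two example below is purely illustrative, not taken from the paper.

```python
def general_multiply(a, b):
    # Full path: always correct, but "expensive" (stands in for the
    # original design in the CCC composition).
    return a * b

def common_case_multiply(a, b):
    # Common-case detection: suppose profiling shows one operand is
    # usually a small power of two; a shift then replaces the multiply.
    if b in (1, 2, 4, 8):                  # detection-circuit analogue
        return a << (b.bit_length() - 1)   # simplified execution path
    return general_multiply(a, b)          # fall back to the original design

# The composed design must agree with the original on all inputs.
assert all(common_case_multiply(a, b) == a * b
           for a in range(-5, 6) for b in (1, 2, 3, 4, 7, 8))
```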
11. Consistency Issues in Distributed Checkpoints.
- Author
-
Hélary, Jean-Michel, Netzer, Robert H. B., and Raynal, Michel
- Subjects
FAULT-tolerant computing ,ELECTRONIC data processing ,COMPUTER software ,COMPUTER reliability ,COMPUTER security ,RELIABILITY in engineering - Abstract
A global checkpoint is a set of local checkpoints, one per process. The traditional consistency criterion for global checkpoints states that a global checkpoint is consistent if it does not include messages received and not sent. This paper investigates two other consistency criteria: transitlessness and strong consistency. A global checkpoint is transitless if it does not exhibit messages sent and not received. Transitlessness can be seen as a dual of traditional consistency. Strong consistency is the addition of transitlessness to traditional consistency. The main result of this paper is a statement of the necessary and sufficient condition answering the following question: "Given an arbitrary set of local checkpoints, can this set be extended to a global checkpoint that satisfies P?" (where P is traditional consistency, transitlessness, or strong consistency). From a practical point of view, this condition, when applied to transitlessness, is particularly interesting as it helps characterize which messages do not need to be recorded by checkpointing protocols. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
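The three criteria in this record can be stated operationally: a global checkpoint is inconsistent if some message is recorded as received but not sent (an orphan), and non-transitless if some message is recorded as sent but not received (still in transit). A small sketch of that classification, with hypothetical per-process checkpoint timestamps:

```python
def classify(checkpoint, messages):
    """checkpoint: dict process -> local clock value of its checkpoint.
    messages: (sender, send_ts, receiver, recv_ts) tuples, timestamps on
    each process's own clock.  Returns which criteria the global
    checkpoint satisfies."""
    orphan = in_transit = False
    for s, st, r, rt in messages:
        sent_before = st <= checkpoint[s]
        recv_before = rt <= checkpoint[r]
        if recv_before and not sent_before:
            orphan = True       # received but not yet sent: inconsistent
        if sent_before and not recv_before:
            in_transit = True   # sent but not yet received: not transitless
    return {'consistent': not orphan,
            'transitless': not in_transit,
            'strongly_consistent': not orphan and not in_transit}

cp = {'P1': 5, 'P2': 3}
msgs = [('P1', 4, 'P2', 2)]  # sent before P1's checkpoint, received before P2's
print(classify(cp, msgs))    # consistent and transitless -> strongly consistent
```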
12. Use of Common Time Base for Checkpointing and Rollback Recovery in a Distributed System.
- Author
-
Ramanathan, Parameswaran and Shin, Kang G.
- Subjects
DISTRIBUTED computing ,ELECTRONIC data processing ,COMPUTER systems ,COMPUTER software ,SOFTWARE engineering ,ELECTRONIC systems - Abstract
A new approach for checkpointing and rollback recovery in a distributed computing system using a common time base is proposed in this paper. First, a common time base is established in the system using a hardware clock synchronization algorithm. This common time base is coupled with the idea of pseudo-recovery points to develop a checkpointing algorithm that has the following advantages: 1) reduced wait for commitment for establishing recovery lines, 2) fewer messages to be exchanged, and 3) less memory requirement. These advantages are assessed quantitatively by developing a probabilistic model. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
13. An Approach to the Modeling of Software Testing with Some Applications.
- Author
-
Downs, Thomas
- Subjects
COMPUTER software ,SOFTWARE engineering ,DEBUGGING ,DATA editing ,ELECTRONIC data processing ,RELIABILITY in engineering - Abstract
In this paper, an approach to the modeling of software testing is described. A major aim of this approach is to allow the assessment of the effects of different testing (and debugging) strategies in different situations. It is shown how the techniques developed can be used to estimate, prior to the commencement of testing, the optimum allocation of test effort for software which is to be nonuniformly executed in its operational phase. In addition, the question of application of statistical models in cases where the data environment undergoes changes is discussed. Finally, two models are presented for the assessment of the effects of imperfections in the debugging process. [ABSTRACT FROM AUTHOR]
- Published
- 1985
14. Reliable Resource Allocation Between Unreliable Processes.
- Author
-
Shrivastava, Santosh Kumar and Banâtre, Jean-Pierre
- Subjects
FAULT-tolerant computing ,ELECTRONIC data processing ,COMPUTER reliability ,COMPUTER programming ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
Basic error recovery problems between interacting processes are first discussed and the desirability of having separate recovery mechanisms for cooperation and competition is demonstrated. The paper then concentrates on recovery mechanisms for processes competing for the use of the shared resources of a computer system. Appropriate programming language features are developed based on the class and inner features of SIMULA, and on the structuring concepts of recovery blocks and monitors. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
15. Global Exponential Stability and Periodicity of Recurrent Neural Networks With Time Delays.
- Author
-
Jinde Cao and Jun Wang
- Subjects
ARTIFICIAL neural networks ,NEURAL circuitry ,ARTIFICIAL intelligence ,COMPUTER software ,DIGITAL computer simulation ,ELECTRONIC data processing - Abstract
In this paper, the global exponential stability and periodicity of a class of recurrent neural networks with time delays are addressed by using the Lyapunov functional method and inequality techniques. The delayed neural network includes the well-known Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks as its special cases. New criteria are found to ascertain the global exponential stability and periodicity of the recurrent neural networks with time delays, and are shown to differ from and improve upon existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
16. Promoting Cooperation in Wireless Relay Networks Through Stackelberg Dynamic Scheduling.
- Author
-
Canzian, Luca, Badia, Leonardo, and Zorzi, Michele
- Subjects
WIRELESS communications ,RADIO relay systems ,MANAGEMENT information systems ,COMPUTER software ,ELECTRONIC data processing ,INFORMATION resources management - Abstract
This paper discusses a new perspective for the application of game theory to wireless relay networks, namely, how to employ it not only as an analytical evaluation instrument, but also in constructively deriving practical network management policies. We focus on the problem of medium sharing in wireless networks, which is often seen as a case where game theory just proves the inefficiency of distributed access, without proposing any remedy. Instead, we show how, by properly modeling the agents involved in such a scenario, and enabling simple but effective incentives towards cooperation for the users, we obtain a resource allocation scheme which is meaningful from both perspectives of game theory and network engineering. Such a result is achieved by introducing throughput redistribution as a way to transfer utilities, which enables cooperation among the users. Finally, a Stackelberg formulation is proposed, involving the network access point as a further player. Our approach is also able to take into account power consumption of the terminals, still without treating it as an insurmountable hurdle to cooperation, and at the same time to drive the network allocation towards an efficient cooperation level. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
17. AML: Efficient Approximate Membership Localization within a Web-Based Join Framework.
- Author
-
Li, Zhixu, Sitbon, Laurianne, Wang, Liwei, Zhou, Xiaofang, and Du, Xiaoyong
- Subjects
DATA extraction ,INTERNET searching ,ELECTRONIC data processing ,INFORMATION retrieval ,ELECTRONIC information resource searching ,COMPUTER programming ,COMPUTER software - Abstract
In this paper, we propose a new type of dictionary-based entity recognition problem, named Approximate Membership Localization (AML). The popular Approximate Membership Extraction (AME) provides full coverage of the true matched substrings from a given document, but its many redundancies make the AME process inefficient and deteriorate the performance of real-world applications using the extracted substrings. The AML problem aims to locate nonoverlapping substrings, a better approximation to the true matched substrings that generates no overlapped redundancies. In order to perform AML efficiently, we propose the optimized algorithm P-Prune, which prunes a large part of the overlapped redundant matched substrings before generating them. Our study using several real-world data sets demonstrates the efficiency of P-Prune over a baseline method. We also study AML in application to a proposed web-based join framework, a search-based approach that joins two tables using dictionary-based entity recognition from web documents. The results not only prove the advantage of AML over AME, but also demonstrate the effectiveness of our search-based approach. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
18. Demosaicking of Noisy Bayer-Sampled Color Images With Least-Squares Luma-Chroma Demultiplexing and Noise Level Estimation.
- Author
-
Jeon, Gwanggil and Dubois, Eric
- Subjects
COLOR filter arrays ,MULTIPLEXING ,DATA transmission systems ,ELECTRONIC data processing ,FILTERING software ,COMPUTER software ,IMAGE processing ,INFORMATION processing - Abstract
This paper adapts the least-squares luma-chroma demultiplexing (LSLCD) demosaicking method to noisy Bayer color filter array (CFA) images. A model is presented for the noise in white-balanced gamma-corrected CFA images. A method to estimate the noise level in each of the red, green, and blue color channels is then developed. Based on the estimated noise parameters, one of a finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data. The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE). Experimental results demonstrate state-of-the-art performance over a wide range of noise levels, with low computational complexity. Many results with several algorithms, noise levels, and images are presented on our companion web site along with software to allow reproduction of our results. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
19. Selecting Spatiotemporal Patterns for Development of Parallel Applications.
- Author
-
Hoffmann, Henry, Agarwal, Anant, and Devadas, Srinivas
- Subjects
SPATIOTEMPORAL processes ,PARALLEL computers ,PARALLEL programming ,COMPUTER software ,DECISION trees ,SPATIAL data structures ,ELECTRONIC data processing ,COMPUTER security - Abstract
Design patterns for parallel computing attempt to make the field accessible to nonexperts by generalizing the common techniques experts use to develop parallel software. Existing parallel patterns have tremendous descriptive power, but it is often unclear to nonexperts how to choose a pattern based on the specific performance goals of a given application. This paper addresses the need for a pattern selection methodology by presenting four patterns and an accompanying decision framework for choosing from these patterns given an application's throughput and latency goals. The patterns are based on recognizing that one can partition an application's data or instructions and that these partitionings can be done in time or space, hence we refer to them as spatiotemporal partitioning strategies. This paper introduces a taxonomy that describes each of the resulting four partitioning strategies and presents a three-step methodology for selecting one or more given a throughput and latency goal. Several case studies are presented to illustrate the use of this methodology. These case studies cover several simple examples as well as more complicated applications including a radar processing application and an H.264 video encoder. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
20. Bidirectional Coupling Between 3-D Field Simulation and Immersive Visualization Systems.
- Author
-
Buendgens, Daniel, Hamacher, Andreas, Hafner, Martin, Kuhlen, Torsten, and Hameyer, Kay
- Subjects
SIMULATION methods & models ,ELECTROMAGNETISM ,DATA visualization ,ELECTRONIC data processing ,VIRTUAL reality ,RESEARCH teams ,COMPUTER software - Abstract
The interactive exploration of complex simulation data has spurred a renewed interest in visualization techniques because of their ability to give an intuitive clue for the interpretation of electromagnetic phenomena. This paper presents a methodology for bidirectional coupling of VTK-based visualization systems to interactive and immersive visualization systems that are specially adapted for the handling and processing of large and transient simulation data. In this work, the coupling is demonstrated with the flexible virtual reality (VR) software framework ViSTA, which is used by many national and international research groups. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
21. Design, Modeling, and Test of a System for Atmospheric Electric Field Measurement.
- Author
-
Fort, Ada, Mugnaini, Marco, Vignoli, Valerio, Rocchi, Santina, Perini, Federico, Monari, Jader, Schiaffino, Marco, and Fiocchi, Franco
- Subjects
ATMOSPHERIC electricity ,ELECTRIC fields ,POLLUTION management ,ELECTRONIC data processing ,METROLOGY ,ENERGY consumption ,DETECTORS ,COMPUTER software ,ATMOSPHERIC models - Abstract
In this paper, a new version of the field-mill sensor structure for atmospheric electric field measurements is presented. Both the hardware components (i.e., the mechanical structure, the electronic front end, and the acquisition and control systems) and the data processing software are designed in order to reduce power consumption and enhance the instrument metrological performance in terms of accuracy, sensitivity, and frequency band. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
22. Write Activity Minimization for Nonvolatile Main Memory Via Scheduling and Recomputation.
- Author
-
Hu, Jingtong, Tseng, Wei-Che, Xue, Chun Jason, Zhuge, Qingfeng, Zhao, Yingchao, and Sha, Edwin H.-M.
- Subjects
COMPUTER storage device industry ,COMPUTER scheduling ,RANDOM access memory ,FLASH memory ,ELECTRONIC data processing ,COMPUTER software ,EXPERIMENTS - Abstract
Nonvolatile memories such as Flash memory, phase change memory (PCM), and magnetic random access memory (MRAM) have many desirable characteristics for embedded systems to employ them as main memory. However, there are two common challenges we need to answer before we can apply nonvolatile memory as main memory practically. First, nonvolatile memory has limited write/erase cycles compared to DRAM. Second, a write operation is slower than a read operation on nonvolatile memory. These two challenges can be answered by reducing the number of write activities on nonvolatile main memory. In this paper, we propose two optimization techniques, write-aware scheduling and recomputation, to minimize write activities on nonvolatile memory. With the proposed techniques, we can both speed up the completion time of programs and extend nonvolatile memory's lifetime. The experimental results show that the proposed techniques can reduce the number of write activities on nonvolatile memory by 55.71% on average. Thus, the lifetime of nonvolatile memory is extended to 2.5 times as long as before on average. The completion time of programs can be reduced by 56.67% on systems with NOR Flash memory and by 47.63% on systems with NAND Flash memory on average. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
23. Efficient Fast 1-D 8 x 8 Inverse Integer Transform for VC-1 Application.
- Author
-
Chih-Peng Fan and Guo-An Su
- Subjects
COMPUTATIONAL complexity ,COMPUTER software ,VIDEO compression ,ELECTRONIC data processing ,COMPUTER system conversion ,COMPUTER algorithms ,SYSTEMS design - Abstract
In this paper, the one-dimensional (1-D) fast 8 x 8 inverse integer transform algorithm for Windows Media Video 9 (WMV-9/VC-1) is proposed. Based on the symmetric property of the integer transform matrix and the matrix operations, which denote the row/column permutations and the matrix decompositions, the efficient fast 1-D 8 x 8 inverse integer transform is developed. Therefore, the computational complexities of the proposed fast inverse transform are smaller than those of the direct method and the previous fast method. With low complexity, the proposed fast algorithm is suitable to accelerate the video coding computations. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
24. Distributional Features for Text Categorization.
- Author
-
Xiao-Bing Xue and Zhi-Hua Zhou
- Subjects
ELECTRONIC data processing ,MATHEMATICAL analysis ,COMPUTER algorithms ,COMPUTER software ,SYSTEMS development ,PROGRAMMING languages - Abstract
Text categorization is the task of assigning predefined categories to natural language text. With the widely used "bag-of-words" representation, previous research usually assigns a word values that express whether the word appears in the document concerned or how frequently it appears. Although these values are useful for text categorization, they do not fully express the abundant information contained in the document. This paper explores the effect of other types of values, which express the distribution of a word in the document. These novel values assigned to a word are called distributional features, which include the compactness of the appearances of the word and the position of the first appearance of the word. The proposed distributional features are exploited by a tfidf-style equation, and different features are combined using ensemble learning techniques. Experiments show that the distributional features are useful for text categorization. In contrast to using the traditional term frequency values alone, including the distributional features requires only a little additional cost, while the categorization performance can be significantly improved. Further analysis shows that the distributional features are especially useful when documents are long and the writing style is casual. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
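The two distributional features named in this record, compactness of a word's appearances and position of its first appearance, can be computed directly from token positions. The sketch below uses one plausible formalization (mean distance from the positions' centroid, normalized by document length); the paper's exact formulas may differ.

```python
def distributional_features(doc_tokens, word):
    """First-appearance position and compactness of `word`, both
    normalized by document length.  Compactness is taken here as the
    mean distance of the word's positions from their centroid
    (an assumed formalization, smaller = more compact)."""
    positions = [i for i, t in enumerate(doc_tokens) if t == word]
    if not positions:
        return None
    n = len(doc_tokens)
    centroid = sum(positions) / len(positions)
    return {
        'first_appearance': positions[0] / n,
        'compactness': sum(abs(p - centroid) for p in positions)
                       / (len(positions) * n),
    }

doc = 'the cat sat on the mat near the cat flap'.split()
print(distributional_features(doc, 'cat'))
```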
25. One-Way Delay Measurement: State of the Art.
- Author
-
De Vito, Luca, Rapuano, Sergio, and Tomaciello, Laura
- Subjects
COMPUTER networks ,ELECTRONIC data processing ,ELECTRONIC systems ,GLOBAL Positioning System ,ARTIFICIAL satellites ,DIGITAL communications ,INTERNETWORKING devices ,NETWORK hubs ,COMPUTER software - Abstract
Nowadays, the evaluation of performance measurement in computer networks is an important issue. To ensure the quality of service of the network communication, one of the most important network performance parameters is the one-way delay (OWD). For accurate OWD estimation, it is essential to consider some parameters that can influence the measure, such as the operating system and, in particular, the threads, which are concurrent with the measurement application. Moreover, OWD estimation is not an easy task, because it can be affected by synchronization uncertainties. This paper aims to review the different solutions proposed in the scientific literature for OWD measurement. These solutions adopt different methods to guarantee a reasonable clock synchronization based on the Network Time Protocol, the Global Positioning System, and the IEEE 1588 Standard. These different approaches are critically reviewed, showing their advantages and disadvantages. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
26. Tracing Worm Break-In and Contaminations via Process Coloring: A Provenance-Preserving Approach.
- Author
-
Xuxian Jiang, Buchholz, Florian, Walters, Aaron, Dongyan Xu, Yi-min Wang, and Spafford, Eugene H.
- Subjects
COMPUTER worms ,COMPUTER viruses ,COMPUTER security ,DATA protection ,COMPUTER software ,ELECTRONIC data processing - Abstract
To detect and investigate self-propagating worm attacks against networked servers, the following capabilities are desirable: 1) raising timely alerts to trigger a worm investigation, 2) determining the break-in point of a worm, i.e., the vulnerable service from which the worm infiltrates the victim, and 3) identifying all contaminations inflicted by the worm during its residence in the victim. In this paper, we argue that worm break-in provenance information has not been exploited in achieving these capabilities and thus propose process coloring, a new approach that preserves worm break-in provenance information and propagates it along operating-system-level information flows. More specifically, process coloring assigns a "color," a unique systemwide identifier, to each remotely accessible server process. The color is either inherited by spawned child processes or diffused transitively through process actions. Process coloring achieves three new capabilities: color-based worm warning generation, break-in point identification, and log file partitioning. The virtualization-based implementation enables more tamper-resistant log collection, storage, and real-time monitoring. Beyond the overhead introduced by virtualization, process coloring incurs only very small additional system overhead. Experiments with real-world worms demonstrate the advantages of process coloring over non-provenance-preserving tools. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
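The coloring mechanics this record describes (a unique color per server process, inheritance on spawn, diffusion through process actions, and color-based log partitioning) can be sketched in a few lines; the process names and the file-write action below are illustrative assumptions, not the paper's implementation.

```python
class ColoredProcess:
    """Toy model of process coloring: each remotely accessible server
    process gets a unique color; children inherit it."""
    def __init__(self, name, colors=None):
        self.name, self.colors = name, set(colors or ())

    def spawn(self, child_name):
        return ColoredProcess(child_name, self.colors)  # inheritance

def write_file(proc, file_colors, path, log):
    # Diffusion: the process's colors taint the object it touches,
    # and the action is recorded in a color-tagged log.
    file_colors.setdefault(path, set()).update(proc.colors)
    log.append((tuple(sorted(proc.colors)), 'write', path))

httpd = ColoredProcess('httpd', {'RED'})
worker = httpd.spawn('httpd-worker')   # inherits RED
files, log = {}, []
write_file(worker, files, '/tmp/payload', log)
# Log partitioning: isolate one service's contaminations by color.
print([entry for entry in log if 'RED' in entry[0]])
```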
27. Teams, Computer Modeling, and Design.
- Author
-
Brennen, Shirley D., Strong, Richard J., Ryder, Christopher J., Blendell, Carol, and Molloy, Julie J.
- Subjects
COMPUTER simulation ,SIMULATION methods & models ,COMPUTER software ,OPERATIONAL definitions ,SYSTEMS design ,ELECTRONIC data processing ,SYSTEM analysis - Abstract
This paper presents selected findings from a three-year research project that was funded by the Human Sciences domain of the U.K. Ministry of Defence's scientific research program. A significant number of military systems are operated by teams of varying sizes, and there is a trend toward greater teamwork in the future, as technological advances enable enhanced cooperation between geographically distributed personnel. The need to be able to determine the most appropriate team structure for the most effective performance is becoming greater. The approach that is presented here has taken theoretical concepts from the team performance literature, developed them into an enhanced theoretical formulation, operationalized them, selected representative tradeoff criteria, and implemented them using a computer-modeling tool. The program that was undertaken was able to demonstrate that operationalizing team structure and team-performance-shaping factors in specific behavioral terms in this way has immense potential to generate quantitative output, allowing meaningful comparisons among design or operational alternatives. In addition, the discipline of the operationalization process provides a means for enriching theoretical concepts by grounding them in realistic behavioral terms, and this can lead to enhanced theorizing. Furthermore, once the initial data are collected and the model is built, modification is neither labor nor time intensive. The approach could be developed further to apply to many more team structure and performance concepts. We believe that this will enhance both the theory and the value of team-structure modeling for practical application in system design in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
28. Software-Based Failure Detection and Recovery in Programmable Network Interfaces.
- Author
-
Yizheng Zhou, Vijay Lakamraju, Israel Koren, and Krishna, C. Mani
- Subjects
COMPUTER networks ,COMPUTER interfaces ,COMPUTER software ,INFORMATION technology ,COMPUTER science ,COMPUTER input-output equipment ,USER interfaces ,COMPUTER programming ,ELECTRONIC data processing - Abstract
Emerging network technologies have complex network interfaces that have renewed concerns about network reliability. In this paper, we present an effective low-overhead fault tolerance technique to recover from network interface failures. Failure detection is based on a software watchdog timer that detects network processor hangs and a self-testing scheme that detects interface failures other than processor hangs. The proposed self-testing scheme achieves failure detection by periodically directing the control flow to go through only active software modules in order to detect errors that affect instructions in the local memory of the network interface. Our failure recovery is achieved by restoring the state of the network interface using a small backup copy containing just the right amount of information required for complete recovery. The paper shows how this technique can be made to minimize the performance impact to the host system and be completely transparent to the user. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
29. An Efficient Spatiotemporal Attention Model and Its Application to Shot Matching.
- Author
-
Shan Li and Moon-Chuen Lee
- Subjects
DIGITAL image processing ,IMAGE processing ,INFORMATION processing ,DIGITAL electronics ,ELECTRONIC data processing ,COMPUTATIONAL complexity ,MACHINE theory ,COMPUTER software ,DATABASES - Abstract
Within the framework of spatiotemporal attention detection, this paper proposes an efficient method for focus of attention (FOA) detection which involves combining adaptively the spatial and motion attention to form an overall attention map. Without computing motion explicitly, it detects motion attention using the rank deficiency of grayscale gradient tensors. We also propose an attention-driven shot matching method using primarily FOA. Experimental results demonstrate the effectiveness of the proposed FOA detection method and the proposed shot matching approach outperforms other published methods. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
30. A Temporal Texture Characterization Technique Using Block-Based Approximated Motion Measure.
- Author
-
Rahman, Ashfaqur and Murshed, Manzur
- Subjects
ELECTRONIC data processing ,COMPUTATIONAL complexity ,MACHINE theory ,COMPUTER software ,BLOCKING (Motion pictures) ,TEXTURE (Art) ,HYPERSPACE ,DATABASES ,ELECTRONIC claims processing - Abstract
Characterized by their distinctive motion patterns, temporal textures are natural phenomena exhibiting spatiotemporal regularity with indeterminate spatial and temporal extent. This paper presents a real-time motion-based temporal texture characterization technique, the first to use block-based motion measures with very high classification accuracy, against the popular opinion that such accurate characterization is only possible with pixel-based measures. Finding an optimal weight ratio between space- and time-domain features, at which the accuracy of this block-based technique peaks, has been the essence of this success. Computational complexity analyses and classification results clearly demonstrate the capability of the proposed technique to produce comprehensive classification results comparable to the best pixel-based technique with an overwhelming reduction in computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. Ternary State Circular Sequential k-out-of-n Congestion System.
- Author
-
Li Bai and Fan Zheng
- Subjects
ELECTRONIC data processing ,ELECTRONIC claims processing ,COMPUTER software ,COMPUTATIONAL complexity ,STATISTICAL correlation ,MACHINE theory ,PROBABILITY theory - Abstract
A ternary state circular sequential k-out-of-n congestion (TSCSknC) system is presented. The system is an extension of the circular sequential k-out-of-n congestion (CSknC) system, which considers two connection states: a) congestion (server busy), and b) successful. In contrast, a TSCSknC system considers three connection states: i) congestion, ii) break-down, and iii) successful. It finds applications in reliable systems that must prevent single-point failures, such as (k, n) secret key sharing systems. The system assumes that each of the n servers has known connection probabilities for the congestion, break-down, and successful states. These n servers are arranged in a circle, and connection attempts are made sequentially, round after round. If a server is not congested, the connection can be either successful or a failure. Previously connected servers are blocked from reconnecting if they were in state ii) or iii). Congested servers are attempted repeatedly until k servers are connected successfully, or (n - k + 1) servers have break-down status. In other words, the system works when k servers are successfully connected, but fails when (n - k + 1) servers are in the break-down state. In this paper, we present the recursive and marginal formulas for the system success probability, the system failure probability, and the average stop length, i.e., the number of connections needed to bring the system to a successful or failed state, together with their computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
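The paper derives recursive and marginal formulas for these quantities; as a sanity check on the system definition, the success probability and average stop length can also be estimated by straightforward Monte Carlo simulation. The connection probabilities below are arbitrary illustrative values.

```python
import random

def simulate(probs, k, trials=100_000, rng=random.Random(1)):
    """Monte Carlo estimate for a TSCSknC system.  probs[i] gives server
    i's (congestion, break_down, success) probabilities; attempts go
    round the circle, retrying congested servers, until k successes
    (system works) or n-k+1 break-downs (system fails)."""
    n = len(probs)
    wins, total_len = 0, 0
    for _ in range(trials):
        state = ['open'] * n   # open = may still be attempted
        succ = broke = length = 0
        while succ < k and broke < n - k + 1:
            for i in range(n):
                if state[i] != 'open' or succ >= k or broke >= n - k + 1:
                    continue
                length += 1
                c, b, _ = probs[i]
                u = rng.random()
                if u < c:
                    continue                        # congested: retry later
                elif u < c + b:
                    state[i], broke = 'down', broke + 1
                else:
                    state[i], succ = 'ok', succ + 1
        wins += succ >= k
        total_len += length
    return wins / trials, total_len / trials

probs = [(0.2, 0.1, 0.7)] * 5
print(simulate(probs, k=3))  # (success probability, average stop length)
```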
32. Embedded Software Optimization for AVS-P7 Decoder Real-time Implementation on RISC Core.
- Author
-
Baiying Lei, Wenguang Jin, Jiwei Hu, and Xiaodong Zhang
- Subjects
COMPUTER software ,COMPUTER systems ,ELECTRONIC systems ,REAL-time computing ,ELECTRONIC data processing ,CODING theory ,SIGNAL theory ,REDUCED instruction set computers ,COMPUTERS - Abstract
AVS-P7 is the recent mobile video coding standard of China. Currently, ARM cores are widely used in mobile applications because of their low power consumption. In this paper, a scheme for real-time implementation of the AVS-P7 decoder on the 32-bit RISC processor ARM920T (S3C2440) is presented. The algorithm, redundancy, structure, and memory optimization methods used to achieve real-time AVS-P7 decoding are discussed in detail. The experimental results demonstrate the success of our optimization techniques and the real-time implementation. The ADS, MCPS, PSNR, and simulation results show that the proposed AVS-P7 decoder can decode QVGA image sequences in real time with high image quality, low complexity, and modest memory requirements. The AVS conformance test confirms the proposed AVS-P7 decoder's full compliance with AVS. The proposed AVS-P7 decoder can be applied in many real-time applications such as mobile phones and IPTV in third-generation communications. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
33. Visualizing Design Patterns in Their Applications and Compositions.
- Author
-
Jing Dong, Sheng Yang, and Kang Zhang
- Subjects
DESIGN software ,GRAPHIC methods ,WORLD Wide Web ,PROGRAMMING languages software ,ELECTRONIC data processing ,INTERNET industry ,CHARTS, diagrams, etc. ,GRAPH theory ,COMPUTER software - Abstract
Design patterns are generic design solutions that can be applied and composed in different applications where pattern-related information is generally implicit in the Unified Modeling Language (UML) diagrams of the applications. It is unclear in which pattern instances each modeling element, such as class, attribute, and operation, participates. It is hard for a designer to find the design patterns used in an application design. Consequently, the benefits of design patterns are compromised because designers cannot communicate with each other in terms of the design patterns they used and their design decisions and trade-offs. In this paper, we present a UML profile that defines new stereotypes, tagged values, and constraints for tracing design patterns in UML diagrams. These new stereotypes and tagged values are attached to a modeling element to explicitly represent the role the modeling element plays in a design pattern so that the user can identify the pattern in a UML diagram. Based on this profile, we also develop a Web service (tool) for explicitly visualizing design patterns in UML diagrams. With this service, users are able to visualize design patterns in their applications and compositions because pattern-related information can be dynamically displayed. A real-world case study and a comparative experiment with existing approaches are conducted to evaluate our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
34. Quantum Existence Testing and Its Application for Finding Extreme Values in Unsorted Databases.
- Author
-
Imre, Sándor
- Subjects
ELECTRONIC data processing ,INFORMATION processing ,QUANTUM computers ,COMPUTATIONAL complexity ,PARALLEL processing ,COMPUTER interfaces ,COMPUTER software ,PROGRAM transformation ,MACHINE theory - Abstract
Many information processing and computing problems can be traced back to finding the extreme value of a database or a function. Unfortunately, classical solutions suffer from high computational complexity if the database is unsorted or, equivalently, the function has many local minimum/maximum points. Proposed quantum computing-based solutions involve the repeated application of Grover's searching algorithm. In this paper, we introduce a new technique exploiting the parallel processing capabilities of quantum computing in a different way. We derive a special case of quantum counting, which we call quantum existence testing, that allows adapting the classical logarithmic search algorithm, originally suited to structured databases, to unstructured ones. The paper analyzes the required number of database queries, the corresponding computational complexity, and the probability of error, as well as their relationship. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
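The adaptation this record describes, driving a logarithmic search with an existence test, can be illustrated classically: binary-search the value range and ask at each step whether any entry lies at or below the midpoint. In the paper that oracle is realized by quantum existence testing; the classical stand-in below costs O(N) per query, which is precisely what the quantum version is meant to beat.

```python
def exists_leq(db, threshold):
    # Stand-in for quantum existence testing: does any entry <= threshold?
    return any(x <= threshold for x in db)   # O(N) classically

def minimum_via_existence(db, lo=0, hi=2**16 - 1):
    """Binary search on the value range [lo, hi]: O(log(hi - lo))
    existence queries replace a linear scan of the unsorted database."""
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_leq(db, mid):
            hi = mid          # some entry lies at or below mid
        else:
            lo = mid + 1      # nothing at or below mid
    return lo                 # value of the minimum entry

db = [9123, 77, 40321, 5, 1033]
print(minimum_via_existence(db))  # 5
```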
35. Hardware-Assisted Run-Time Monitoring for Secure Program Execution on Embedded Processors.
- Author
-
Arora, Divya, Ravi, Srivaths, Raghunathan, Anand, and Jha, Niraj K.
- Subjects
EMBEDDED computer systems ,HIGH performance computing ,COMPUTER software ,COMPUTER security ,DATA protection ,HIGH performance processors ,ELECTRONIC data processing - Abstract
Embedded system security is often compromised when ‘trusted’ software is subverted to result in unintended behavior, such as leakage of sensitive data or execution of malicious code. Several countermeasures have been proposed in the literature to counteract these intrusions. A common underlying theme in most of them is to define security policies at the system level in an application-independent manner and check for security violations either statically or at run time. In this paper, we present a methodology that addresses this issue from a different perspective. It defines correct execution as synonymous with the way the program was intended to run and employs a dedicated hardware monitor to detect and prevent unintended program behavior. Specifically, we extract properties of an embedded program through static program analysis and use them as the bases for enforcing permissible program behavior at run time. The processor architecture is augmented with a hardware monitor that observes the program's dynamic execution trace, checks whether it falls within the allowed program behavior, and flags any deviations from expected behavior to trigger appropriate response mechanisms. We present properties that capture permissible program behavior at different levels of granularity, namely inter-procedural control flow, intra-procedural control flow, and instruction-stream integrity. We outline a systematic methodology to design application-specific hardware monitors for any given embedded program. Hardware implementations using a commercial design flow, and cycle-accurate performance simulations indicate that the proposed technique can thwart several common software and physical attacks, facilitating secure program execution with minimal overheads. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
36. On Weight Design of Maximum Weighted Likelihood and an Extended EM Algorithm.
- Author
-
Zhenyue Zhang and Yiu-ming Cheung
- Subjects
COMPUTER algorithms ,SYSTEMS design ,MAXIMAL functions ,COMPUTER software ,COMPUTER programming ,MATHEMATICAL optimization ,ELECTRONIC data processing ,COMPUTER science ,OPERATIONS research - Abstract
The recent Maximum Weighted Likelihood (MWL) [18], [19] has provided a general learning paradigm for density-mixture model selection and learning, in which weight design is a key issue. This paper therefore explores such a design, through which a heuristic extended Expectation-Maximization (X-EM) algorithm is presented. Unlike the EM algorithm [1], the X-EM algorithm is able to perform model selection by fading the redundant components out of a density mixture while estimating the model parameters appropriately. The numerical simulations demonstrate the efficacy of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
37. On the Effectiveness of Secure Overlay Forwarding Systems under Intelligent Distributed DoS Attacks.
- Author
-
Xun Wang, Chellappan, Sriram, Boyer, Phillip, and Dong Xuan
- Subjects
COMPUTER systems ,CLIENT/SERVER computing ,INTERNET ,REAL-time programming ,REAL-time computing ,COMPUTER network architectures ,COMPUTER software ,ELECTRONIC data processing - Abstract
In the framework of a set of clients communicating with a critical server over the Internet, a recent approach to protecting communication from Distributed Denial of Service (DDoS) attacks involves the use of overlay systems. SOS, MAYDAY, and I3 are such systems. The architecture of these systems consists of a set of overlay nodes that serve as intermediate forwarders between the clients and the server, thereby controlling access to the server. Although such systems perform well under random DDoS attacks, it is questionable whether they are resilient to intelligent DDoS attacks, which aim to infer the architectures of the systems in order to launch more efficient attacks. In this paper, we define several intelligent DDoS attack models and develop analytical/simulation approaches to study the impact of architectural design features of such overlay systems on system performance, in terms of path availability between clients and the server under attack. Our data clearly demonstrate that system performance is indeed sensitive to the architectural features, and that the different features interact with each other to affect overall system performance under intelligent DDoS attacks. Our observations provide important guidelines for the design of such secure overlay forwarding systems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
38. Predictable Performance in SMT Processors: Synergy between the OS and SMTs.
- Author
-
Cazorla, Francisco J., Knijnenburg, Peter M. W., Sakellariou, Rizos, Fernández, Enrique, Ramirez, Alex, and Valero, Mateo
- Subjects
SIMULTANEOUS multithreading processors ,PARALLEL processing ,COMPUTER operating systems ,TIME-sharing computer systems ,EMBEDDED computer systems ,THREADS (Computer programs) ,ELECTRONIC data processing ,COMPUTER programming ,MULTIPROCESSORS ,COMPUTER software - Abstract
Current Operating Systems (OS) perceive the different contexts of Simultaneous Multithreaded (SMT) processors as multiple independent processing units, although, in reality, threads executed in these units compete for the same hardware resources. Furthermore, hardware resources are assigned to threads implicitly as determined by the SMT instruction fetch (Ifetch) policy, without the control of the OS. Both factors cause a lack of control over how individual threads are executed, which can frustrate the work of the job scheduler. This presents a problem for general-purpose systems, where the OS job scheduler cannot enforce priorities, and also for embedded systems, where it would be difficult to guarantee worst-case execution times. In this paper, we propose a novel strategy that enables a two-way interaction between the OS and the SMT processor and allows the OS to run jobs at a certain percentage of their maximum speed, regardless of the workload in which these jobs are executed. In contrast to previous approaches, our approach enables the OS to run time-critical jobs without dedicating all internal resources to them, so that non-time-critical jobs can make significant progress as well, without significantly compromising overall throughput. In fact, our mechanism, in addition to fulfilling OS requirements, achieves 90 percent of the throughput of one of the best currently known fetch policies for SMTs. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
39. OMI Level 0 to 1b Processing and Operational Aspects.
- Author
-
van den Oord, G. H. J., Rozemeijer, Nico C., Schenkelaars, V., Levelt, Pieternel F., Dobber, Marcel R., Voors, Robert H. M., Claas, J., de Vries, Johan, ter Linden, M., de Haan, C., and van de Berg, T.
- Subjects
REMOTE sensing ,ARTIFICIAL satellites ,ELECTRONIC data processing ,SPECTROGRAPHS ,COMPUTER software - Abstract
The Ozone Monitoring Instrument (OMI) was launched on July 15, 2004 on the National Aeronautics and Space Administration's Earth Observing System Aura satellite. OMI is an ultraviolet-visible imaging spectrograph providing daily global coverage with high spatial resolution. This paper discusses the ground data processing software used for Level 0 to Level 1b processing of OMI data. In addition, the OMI operations scenario is described together with the data processing concept. This paper is intended to serve as a reference guide for users of OMI (Level 1b) data. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
40. Performance Analysis of the FastICA Algorithm and Cramér—Rao Bounds for Linear Independent Component Analysis.
- Author
-
Tichavský, Petr, Koldovský, Zbynĕk, and Oja, Erkki
- Subjects
ALGORITHMS ,LINEAR dependence (Mathematics) ,COMPUTER software ,COMPUTATIONAL complexity ,ELECTRONIC data processing - Abstract
The FastICA or fixed-point algorithm is one of the most successful algorithms for linear independent component analysis (ICA) in terms of accuracy and computational complexity. Two versions of the algorithm are available in the literature and in software: a one-unit (deflation) algorithm and a symmetric algorithm. The main results of this paper are analytic closed-form expressions that characterize the separating ability of both versions of the algorithm in a local sense, assuming a 'good' initialization of the algorithms and long data records. Based on the analysis, it is possible to combine the advantages of the symmetric and one-unit versions and predict their performance. To validate the analysis, a simple check of the saddle points of the cost function is proposed that allows a global minimum of the cost function to be found in almost 100% of simulation runs. Second, the Cramér-Rao lower bound for linear ICA is derived as an algorithm-independent limit on the achievable separation quality. The FastICA algorithm is shown to approach this limit in certain scenarios. Extensive computer simulations supporting the theoretical findings are included. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
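
A minimal sketch of the textbook one-unit (deflation) FastICA that the paper analyzes, using the tanh nonlinearity and numpy; this illustrates the algorithm class, not the authors' code, and the helper names are ours.

import numpy as np

def whiten(X):
    """Center and whiten mixed signals X (channels x samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    eigval, E = np.linalg.eigh(np.cov(X))
    return (E / np.sqrt(eigval)) @ E.T @ X

def fastica_deflation(X, n_components, iters=200, tol=1e-8):
    Z = whiten(X)
    W = np.zeros((n_components, Z.shape[0]))
    for k in range(n_components):
        w = np.random.randn(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            y = w @ Z
            # fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, with g = tanh
            w_new = (Z * np.tanh(y)).mean(axis=1) - (1 - np.tanh(y) ** 2).mean() * w
            w_new -= W[:k].T @ (W[:k] @ w_new)   # deflation: stay orthogonal
            w_new /= np.linalg.norm(w_new)
            if abs(abs(w_new @ w) - 1) < tol:    # converged up to sign
                break
            w = w_new
        W[k] = w_new
    return W @ Z                                 # estimated sources

# Example: unmix two synthetic non-Gaussian sources.
rng = np.random.default_rng(0)
S = np.vstack([np.sign(rng.standard_normal(5000)), rng.uniform(-1, 1, 5000)])
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ S
print(fastica_deflation(X, 2).shape)             # -> (2, 5000)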
41. Input Space-Adaptive Optimization for Embedded-Software Synthesis.
- Author
-
Weidong Wang, Raghunathan, Anand, Lakshminarayana, Ganesh, and Jha, Niraj K.
- Subjects
EMBEDDED computer systems ,ALGORITHMS ,MATHEMATICAL optimization ,ELECTRONIC data processing ,ELECTRONIC systems ,COMPUTER software - Abstract
This paper presents a technique for exploiting input statistics for energy and performance optimization of embedded software. The proposed technique is based on the fact that the computational complexities of programs or subprograms are often highly dependent on the values assumed by input and intermediate program variables during execution. This observation is exploited in the proposed software synthesis technique by augmenting the program with optimized versions of one or more subprograms that are specialized to, and executed under, specific input subspaces. We propose a methodology for input space-adaptive software synthesis that consists of the following steps: 1) control and value profiling of the input program; 2) application of compiler transformations in a preprocessing step; 3) identification of subprograms and corresponding input subspaces that hold the highest potential for optimization; and 4) an iterative application of known compiler transformations to realize performance and energy savings. We propose novel metrics based on the entropies of program variables to characterize subprograms and input subspaces that hold significant potential for optimization (an illustrative sketch of this entropy metric follows this record). The chosen subprograms are optimized by translating the input subspaces into value constraints on their variables and iteratively applying known compiler transformations (that were not applicable in the context of the original program). We have evaluated input space-adaptive software synthesis by compiling the resulting optimized programs to two commercial embedded systems: a system based on the Fujitsu SPARClite processor, and the Compaq iPAQ personal digital assistant (PDA) [64 MB memory, 206 MHz Intel StrongARM central processing unit (CPU)]. The energy and execution-time savings were calculated using energy-aware instruction-level simulators, as well as through direct-current measurement on the iPAQ. Our results demonstrate that the proposed technique can reduce energy by up to 54.5% (average of 30.6% and 25.6% for the SPARClite-based system and the iPAQ, respectively) while simultaneously improving performance by up to 59.6% (average of 31.3% and 31.5% for the SPARClite-based system and the iPAQ, respectively). In effect, improvements in the energy-delay product of up to 81.1% (average of 51.0% and 47.7% for the SPARClite-based system and the iPAQ, respectively) were observed. The energy savings resulting from our technique are fairly processor independent and complementary to conventional compiler optimizations. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
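
A loose illustration of the entropy metric mentioned in the abstract: a variable whose profiled values are heavily skewed toward a few values has low Shannon entropy and is a natural candidate for input-space specialization. The names below (value_entropy, pick_candidates, the 1.0-bit threshold) are hypothetical, not the paper's.

from collections import Counter
from math import log2

def value_entropy(samples):
    """Shannon entropy (in bits) of a variable's profiled value distribution."""
    counts, n = Counter(samples), len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

def pick_candidates(value_profile, threshold=1.0):
    """Return variables skewed enough to specialize on their common values."""
    return [name for name, samples in value_profile.items()
            if value_entropy(samples) < threshold]

# Example: 'mode' is 0 in 95% of profiled runs, so a subprogram version
# specialized under the value constraint mode == 0 would cover most inputs.
profile = {"mode": [0] * 95 + [1] * 5, "size": list(range(100))}
print(pick_candidates(profile))   # -> ['mode']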
42. A Computationally Efficient Engine for Flexible Intrusion Detection.
- Author
-
Zachary K. Baker and Viktor K. Prasanna
- Subjects
COMPUTER network security ,COMPUTER architecture ,COMPUTER security ,COMPUTER networks ,ALGORITHMS ,DATA protection ,ELECTRONIC data processing ,COMPUTER software ,COMPUTER systems - Abstract
Pattern matching for network security and intrusion detection demands exceptionally high performance. This paper describes a novel systolic array-based string matching architecture using a buffered, two-comparator variation of the Knuth-Morris-Pratt (KMP) algorithm. The architecture compares favorably with state-of-the-art hardwired designs while providing on-the-fly reconfiguration, efficient hardware utilization, and high clock rates. KMP is a well-known, computationally efficient string-matching technique that uses a single comparator and a precomputed transition table; the transition table reduces the number of redundant comparisons performed (a software sketch of classic KMP follows this record). Through various algorithmic changes, we enable KMP to be used in hardware, providing the computational efficiency of the serial algorithm and the high throughput of a parallel hardware architecture. The efficiency of the system allows for a faster and denser implementation than any other RAM-based exact-match system. We add a second comparator and an input buffer and then prove that the modified algorithm functions efficiently when implemented as an element of a systolic array. The system can accept at least one character in each cycle while guaranteeing that the stream will never stall. In this paper, we prove the bound on the buffer size and running time of the systolic array, discuss the architectural considerations involved in the FPGA implementation, and provide performance comparisons against other approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
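
A software sketch of classic KMP, the serial algorithm the paper starts from; the buffered, two-comparator systolic variant is a hardware design and is not reproduced here.

def kmp_table(pattern):
    """Precompute the KMP transition (failure) table."""
    table, k = [0] * len(pattern), 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = table[k - 1]            # fall back; skips redundant compares
        if pattern[i] == pattern[k]:
            k += 1
        table[i] = k
    return table

def kmp_search(text, pattern):
    """Yield the start offset of every occurrence of pattern in text."""
    table, k = kmp_table(pattern), 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = table[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):           # full match; keep scanning
            yield i - k + 1
            k = table[k - 1]

print(list(kmp_search("abcabcabd", "abcabd")))   # -> [3]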
43. An Algorithm for Trading Off Quantization Error with Hardware Resources for MATLAB-Based FPGA Design.
- Author
-
Roy, Sanghamitra and Banerjee, Prith
- Subjects
DIGITAL signal processing ,ALGORITHMS ,DIGITAL communications ,SIGNAL processing ,COMPUTER software ,ELECTRONIC data processing - Abstract
Most practical FPGA designs of digital signal processing (DSP) applications are limited to fixed-point arithmetic owing to the cost and complexity of floating-point hardware. When mapping DSP applications onto FPGAs, a DSP algorithm designer must determine the dynamic range and desired precision of input, intermediate, and output signals in a design implementation. The first step in a MATLAB-based hardware design flow is the conversion of the floating-point MATLAB code into a fixed-point version using "quantizers" from the Filter Design and Analysis (FDA) Toolbox for MATLAB. This paper describes an approach to automating this conversion of floating-point MATLAB programs into fixed-point MATLAB programs for mapping to FPGAs, by profiling the expected inputs to estimate errors. Our algorithm attempts to minimize the hardware resources while constraining the quantization error within a specified limit (an illustrative sketch of this tradeoff follows this record). Experimental results on five MATLAB benchmarks are reported for Xilinx Virtex II FPGAs. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
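
A hedged Python stand-in for the tradeoff the algorithm searches over: fewer fractional bits mean cheaper fixed-point hardware but larger quantization error. The paper's profiling-driven search and MATLAB quantizers are not reproduced; this only shows how error can be profiled per bit width.

import numpy as np

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with the given fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(1)
signal = rng.uniform(-1, 1, 10_000)     # profiled "expected inputs"

for bits in (4, 8, 12):
    err = np.max(np.abs(signal - quantize(signal, bits)))
    print(f"{bits:2d} fractional bits -> max error {err:.6f}")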
44. Optimal Power Flow With Complementarity Constraints.
- Author
-
Rosehart, William, Roman, Codruta, and Schellenberg, Antony
- Subjects
PROGRAMMING languages ,ELECTRONIC data processing ,ARTIFICIAL languages ,COMPUTER programmers ,COMPUTER software ,ELECTRIC power ,POWER resources - Abstract
This paper proposes a mathematical program with complementarity constraints (MPCC) to better model the relationship between the base, or current, operating point and the maximum loading point in a power system when solving maximum loading problems. [ABSTRACT FROM AUTHOR] (A generic sketch of the MPCC problem class follows this record.)
- Published
- 2005
- Full Text
- View/download PDF
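
For orientation only, a generic mathematical program with complementarity constraints has the shape below (in LaTeX); the paper's specific power-flow objective, variables, and constraints are not reproduced here.

\[
\begin{aligned}
\min_{x,\,y,\,z} \quad & f(x, y, z) \\
\text{s.t.} \quad & g(x, y, z) \le 0, \qquad h(x, y, z) = 0, \\
& 0 \le y \;\perp\; z \ge 0,
\end{aligned}
\]

The condition $0 \le y \perp z \ge 0$ abbreviates $y \ge 0$, $z \ge 0$, and $y^{\mathsf{T}} z = 0$, so at most one of each pair $(y_i, z_i)$ may be nonzero; in this problem class, such pairs typically encode either-or behavior, for example, a limit being binding or slack.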
45. Multiarea State Estimation Using Synchronized Phasor Measurements.
- Author
-
Liang Zhao and Ali Abur
- Subjects
ELECTRONIC data processing ,BATCH processing ,COMPUTER software ,MOTOR vehicles ,CONTROL theory (Engineering) ,SYSTEM analysis ,MACHINE theory - Abstract
This paper investigates the problem of state estimation in very large power systems, which may contain several control areas. An estimation approach is presented that coordinates locally obtained decentralized estimates while improving bad-data processing capability at the area boundaries. Each area is held responsible for maintaining a sufficiently redundant measurement set to allow bad-data processing among its internal measurements. It is assumed that synchronized phasor measurements from different area buses are available in addition to the conventional measurements provided by the substation remote terminal units. The estimator is implemented and tested using different measurement configurations for the IEEE 118-bus test system and the 4520-bus ERCOT system. [ABSTRACT FROM AUTHOR] (An illustrative weighted least-squares sketch follows this record.)
- Published
- 2005
- Full Text
- View/download PDF
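
An illustrative weighted least-squares (WLS) core of the kind used inside power system state estimators, with a linear measurement model standing in for the nonlinear power-flow equations; the multiarea coordination and bad-data logic are omitted, and all numbers are made up.

import numpy as np

def wls_estimate(H, z, weights):
    """Solve min (z - Hx)' W (z - Hx) via the normal equations."""
    W = np.diag(weights)
    G = H.T @ W @ H                     # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Example: 2 states, 4 measurements; the last two rows play the role of
# highly trusted synchronized phasor measurements and get larger weights.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
x_true = np.array([1.02, 0.97])
z = H @ x_true + np.array([0.01, -0.02, 0.005, 0.0])
weights = np.array([1.0, 1.0, 100.0, 100.0])
print(wls_estimate(H, z, weights))      # close to [1.02, 0.97]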
46. Pixel Clustering by Adaptive Pixel Moving and Chaotic Synchronization.
- Author
-
Zhao, Liang, de Carvalho, Andre C.P.L.F., and Zhaohui Li
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER software ,ELECTRONIC data processing ,DIGITAL computer simulation ,MACHINE theory - Abstract
In this paper, a network of coupled chaotic maps for pixel clustering is proposed. Time evolutions of chaotic maps in the network corresponding to a pixel cluster are synchronized with each other. Those synchronized trajectories are desynchronized with respect to the time evolutions of chaotic maps corresponding to other pixel clusters in the same image. A pixel motion mechanism is also introduced, which makes each group of pixels more compact and, consequently, makes the model robust enough to classify ambiguous pixels. Another feature of the proposed model is that the number of pixel clusters does not need to be known in advance. [ABSTRACT FROM AUTHOR] (A toy sketch of chaotic-map synchronization follows this record.)
- Published
- 2004
- Full Text
- View/download PDF
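
A toy sketch of the synchronization principle behind the model: two mutually coupled logistic maps converge to a common trajectory (one 'cluster'), while an uncoupled map drifts apart from them. The actual pixel network and motion mechanism are not reproduced.

def logistic(x, a=4.0):
    return a * x * (1.0 - x)

x, y, z = 0.30, 0.70, 0.70        # z receives no coupling
eps = 0.45                        # coupling strength within one cluster
for _ in range(60):
    fx, fy, fz = logistic(x), logistic(y), logistic(z)
    x, y = fx + eps * (fy - fx), fy + eps * (fx - fy)   # mutual coupling
    z = fz

print(abs(x - y) < 1e-6)          # True: the coupled maps have synchronized
print(abs(x - z) > 1e-3)          # typically True: the uncoupled map diverged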
47. Tolerating Late Memory Traps in Dynamically Scheduled Processors.
- Author
-
Qiu, Xiaogang and Dubois, Michel
- Subjects
COMPUTER storage devices ,MEMORY ,RESEARCH ,COMPUTER software ,ELECTRONIC data processing ,CACHE memory - Abstract
In the past few years, exception support for memory functions such as virtual memory, informing memory operations, software assists for shared-memory protocols, or interactions with processors in memory has been advocated in various research papers. These memory traps may occur on a miss in the cache hierarchy or on a local or remote memory access. However, contemporary dynamically scheduled processors only support memory exceptions detected in the TLB associated with the first-level cache; they do not support memory exceptions taken deep in the memory hierarchy. In this case, memory traps may be late, in the sense that the exception condition may still be undecided when a long-latency memory instruction reaches the retirement stage. In this paper, we evaluate through simulation the overhead of memory traps in dynamically scheduled processors, focusing on the added overhead incurred when a memory trap is late. We also propose some simple mechanisms to reduce this added overhead while preserving the memory consistency model. With more aggressive memory access mechanisms in the processor, we observe that the overhead of all memory traps, whether early or late, increases, while the lateness of a trap becomes largely tolerated, so that the performance gap between early and late memory traps is greatly reduced. Additionally, because of caching effects in the memory hierarchy, the frequency of memory traps usually decreases as they are taken deeper in the memory hierarchy, and their overall impact on execution times becomes negligible. We conclude that support for memory traps taken throughout the memory hierarchy could be added to dynamically scheduled processors at low hardware cost and with little performance degradation. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
48. Behavior Protocols for Software Components.
- Author
-
Plasil, Frantisek and Visnovsky, Stanislav
- Subjects
COMPUTER programming ,COMPUTER software ,COMPUTER network protocols ,ELECTRONIC data processing ,ARTIFICIAL intelligence ,COMPUTER architecture - Abstract
In this paper, we propose a means to enhance an architecture description language with a description of component behavior. A notation used for this purpose should be able to express the "interplay" on the component's interfaces and reflect step-by-step refinement of the component's specification during its design. In addition, the notation should be easy to comprehend and allow for formal reasoning about the correctness of the specification refinement, and also about the correctness of an implementation in terms of whether it adheres to the specification. Targeting all these requirements together, the paper proposes employing behavior protocols, which are based on a notation similar to regular expressions (a toy conformance-checking sketch follows this record). As proof of the concept, the behavior protocols are used in the SOFA architecture description language at three levels: interface, frame, and architecture. Key achievements of this paper include the definitions of bounded component behavior and of a protocol conformance relation. Using these concepts, the designer can verify the adherence of a component's implementation to its specification at runtime, while the correctness of refining the specification can be verified at design time. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
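
A toy conformance check in the spirit of regular-expression-like behavior protocols; SOFA's actual notation has richer operators (e.g., parallel composition), so this is only a flavor of the idea.

import re

# Protocol: an open call, then one or more read or write calls, then close.
PROTOCOL = re.compile(r"open;(read;|write;)+close;")

def conforms(trace):
    """Check a finished call trace against the protocol."""
    return PROTOCOL.fullmatch(";".join(trace) + ";") is not None

print(conforms(["open", "read", "write", "close"]))   # True
print(conforms(["read", "close"]))                    # False: protocol violated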
49. Reusability of Mathematical Software: A Contribution.
- Author
-
Di Felice, Paolino
- Subjects
COMPUTER software reusability ,SOFTWARE engineering ,PROGRAMMING languages ,ELECTRONIC data processing ,COMPUTER programming ,COMPUTER software - Abstract
Mathematical software is devoted to solving problems involving matrix computation and manipulation. The main problem limiting the reusability of existing mathematical software is that programs are often not initially designed to be reused; therefore, it is hard to find programs that can be easily reused. In the first part of this paper, we give a programming methodology useful for designing and implementing reusable code. We name a portion of code designed and implemented for reuse a unit. Our units are self-contained software components featuring a high degree of information hiding (an illustrative unit follows this record). This way of organizing software facilitates the reuse process and, furthermore, improves the understandability of units. To speed up the implementation process, a system supporting the reusability of units from an existing software library is particularly useful. In the second part of this paper, we report on an easy-to-use system of this kind. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
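
An illustrative 'unit' in the paper's sense: a self-contained component that hides its internal representation behind a narrow interface so it can be reused without reading its implementation. The example is ours (the paper's units are not Python classes); the solver is the standard Thomas algorithm.

class TridiagonalSolver:
    """Reusable unit: solves Ax = d for a tridiagonal matrix A."""

    def __init__(self, lower, diag, upper):
        # The banded storage scheme is an internal, hidden detail.
        self._a, self._b, self._c = list(lower), list(diag), list(upper)

    def solve(self, rhs):
        a, b, c, d = self._a, self._b[:], self._c, list(rhs)
        n = len(b)
        for i in range(1, n):                 # forward elimination
            m = a[i - 1] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = [0.0] * n
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x

# Callers see only the interface, never the elimination details.
print(TridiagonalSolver([1, 1], [4, 4, 4], [1, 1]).solve([5, 6, 5]))  # -> [1.0, 1.0, 1.0]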
50. An Analysis of Test Data Selection Criteria Using the RELAY Model of Fault Detection.
- Author
-
Richardson, Debra J. and Thompson, Margaret C.
- Subjects
FAULT-tolerant computing ,ELECTRONIC data processing ,COMPUTER software ,SOFTWARE engineering ,COMPUTER systems ,ELECTRONIC systems - Abstract
RELAY is a model of faults and failures that defines failure conditions: descriptions of test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which the capabilities of other testing criteria can be evaluated. In this paper, we analyze three test data selection criteria that attempt to detect faults in six fault classes. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes, and it points out two major weaknesses of the criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules; each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. The second weakness is the failure to integrate the proposed rules; although a criterion may cause a subexpression to take on an erroneous value, no effort is made to guarantee that the intermediate values cause observable, erroneous behavior. This paper shows how the RELAY model overcomes these weaknesses (a small code illustration of origination versus transfer follows this record). [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
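
A small code illustration of the origination/transfer distinction at the heart of RELAY: a fault may corrupt an intermediate value (origination) and yet be masked by later computation, so no failure is revealed unless the error also transfers to the output. The example is ours.

def faulty(a, b):
    t = a - b        # FAULT: the correct computation is t = a + b
    return t * t     # squaring can mask the sign error in t

print(faulty(3, 0) == (3 + 0) ** 2)   # True:  b == 0, the fault never
                                      #        originates an erroneous value
print(faulty(0, 5) == (0 + 5) ** 2)   # True:  t is wrong (-5 vs 5), but
                                      #        squaring masks it: no transfer
print(faulty(2, 1) == (2 + 1) ** 2)   # False: the error transfers and a
                                      #        failure is revealed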