173 results for "Distributed processing (Computers) -- Research"
Search Results
2. Nomadic Pict: programming languages, communication infrastructure overlays, and semantics for mobile computation
- Author
-
Sewell, Peter, Wojciechowski, Pawel T., and Unyapoth, Asis
- Subjects
Algorithm, Programming language, Distributed processing (Computers), Algorithms -- Research, Programming languages -- Research, Distributed processing (Computers) -- Research
- Abstract
Mobile computation, in which executing computations can move from one physical computing device to another, is a recurring theme: from OS process migration, to language-level mobility, to virtual machine migration. This article reports on the design, implementation, and verification of overlay networks to support reliable communication between migrating computations, in the Nomadic Pict project. We define two levels of abstraction as calculi with precise semantics: a low-level Nomadic π-calculus with migration and location-dependent communication, and a high-level calculus that adds location-independent communication. Implementations of location-independent communication, as overlay networks that track migrations and forward messages, can be expressed as translations of the high-level calculus into the low. We discuss the design space of such overlay network algorithms and define three precisely, as such translations. Based on the calculi, we design and implement the Nomadic Pict distributed programming language, to let such algorithms (and simple applications above them) be quickly prototyped. We go on to develop the semantic theory of the Nomadic π-calculi, proving correctness of one example overlay network. This requires novel equivalences and congruence results that take migration into account, and reasoning principles for agents that are temporarily immobile (e.g., waiting on a lock elsewhere in the system). The whole stands as a demonstration of the use of principled semantics to address challenging system design problems.
Categories and Subject Descriptors: C.2.2 [Computer-Communication Networks]: Network Protocols; C.2.4 [Computer-Communication Networks]: Distributed Systems; D.3.3 [Programming Languages]: Language Constructs and Features; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages
General Terms: Algorithms, Design, Languages, Theory, Verification
ACM Reference Format: Sewell, P., Wojciechowski, P. T., and Unyapoth, A. 2010. Nomadic Pict: Programming languages, communication infrastructure overlays, and semantics for mobile computation. ACM Trans. Program. Lang. Syst. 32, 4, Article 12 (April 2010), 63 pages. DOI = 10.1145/1734206.1734209 http://doi.acm.org/10.1145/1734206.1734209
- Published
- 2010
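The overlay algorithms this abstract describes track migrations and forward messages so that senders need not know an agent's current location. A toy Python sketch of the simplest such algorithm, a central forwarding server (all class, site, and agent names are invented for illustration; this is not the authors' code):

```python
# Toy sketch of a forwarding-pointer overlay: a home server records
# each agent's current site and forwards location-independent messages.

class HomeServer:
    def __init__(self):
        self.location = {}  # agent name -> current site name

    def register(self, agent, site):
        self.location[agent] = site

    def migrate(self, agent, new_site):
        # In the low-level calculus, migration and the location update
        # must be one coordinated step, or messages can be lost in flight.
        self.location[agent] = new_site

    def send(self, agent, msg, sites):
        # Location-independent send: the server, not the sender,
        # resolves the agent's current site and forwards the message.
        sites[self.location[agent]].deliver(agent, msg)

class Site:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def deliver(self, agent, msg):
        self.inbox.append((agent, msg))

sites = {"a": Site("a"), "b": Site("b")}
server = HomeServer()
server.register("worker", "a")
server.migrate("worker", "b")          # agent moves; server tracks it
server.send("worker", "hello", sites)  # forwarded to site "b"
print(sites["b"].inbox)                # [('worker', 'hello')]
```

The paper's point is that such an algorithm can be written as a translation from the high-level calculus into the low-level one and then proved correct, rather than trusted by inspection.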
3. Why critical systems need help to evolve
- Author
-
Cohen, Bernard and Boxer, Philip
- Subjects
Distributed processing (Computers), Orthopedics -- Research, Cloud computing -- Research, Distributed processing (Computers) -- Research, Systems engineering
- Published
- 2010
4. Tight bounds for clock synchronization
- Author
-
Lenzen, Christoph, Locher, Thomas, and Wattenhofer, Roger
- Subjects
Distributed processing (Computers), Functions of bounded variation -- Usage, Clock cycles (Computers) -- Analysis, Clock cycles (Computers) -- Properties, Distributed processing (Computers) -- Research
- Published
- 2010
5. Predicting and preventing inconsistencies in deployed distributed systems
- Author
-
Yabandeh, Maysam, Knezevic, Nikola, Kostic, Dejan, and Kuncak, Viktor
- Subjects
Algorithm, Distributed processing (Computers), Algorithms -- Research, Distributed processing (Computers) -- Research, Reliability (Engineering) -- Research, Information systems -- Research
- Abstract
We propose a new approach for developing and deploying distributed systems, in which nodes predict distributed consequences of their actions and use this information to detect and avoid errors. Each node continuously runs a state exploration algorithm on a recent consistent snapshot of its neighborhood and predicts possible future violations of specified safety properties. We describe a new state exploration algorithm, consequence prediction, which explores causally related chains of events that lead to property violation. This article describes the design and implementation of this approach, termed CrystalBall. We evaluate CrystalBall on RandTree, BulletPrime, Paxos, and Chord distributed system implementations. We identified new bugs in mature Mace implementations of three systems. Furthermore, we show that if the bug is not corrected during system development, CrystalBall is effective in steering the execution away from inconsistent states at runtime.
Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems; H.4.3 [Information Systems Applications]: Communications Applications
General Terms: Experimentation, Reliability
Additional Key Words and Phrases: Distributed systems, consequence prediction, reliability, execution steering, enforcing safety properties
ACM Reference Format: Yabandeh, M., Knezevic, N., Kostic, D., and Kuncak, V. 2010. Predicting and preventing inconsistencies in deployed distributed systems. ACM Trans. Comput. Syst. 28, 1, Article 2 (March 2010), 49 pages. DOI = 10.1145/1731060.1731062 http://doi.acm.org/10.1145/1731060.1731062
- Published
- 2010
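The consequence prediction the abstract describes explores causally related chains of events from a snapshot, looking for states that violate a safety property. A minimal toy sketch of that exploration loop (the state, events, and safety predicate are hypothetical; the real system operates on Mace state machines and is far more selective about which chains it explores):

```python
# Toy state exploration: from a snapshot, breadth-first explore chains
# of events and report any chain that reaches an unsafe state, so the
# node can steer execution away from it at runtime.

def explore(state, events, safe, depth):
    """Return a violating event chain up to `depth` steps, or None."""
    frontier = [(state, [])]
    for _ in range(depth):
        next_frontier = []
        for s, trace in frontier:
            for name, apply_event in events:
                s2 = apply_event(s)
                t2 = trace + [name]
                if not safe(s2):
                    return t2          # predicted safety violation
                next_frontier.append((s2, t2))
        frontier = next_frontier
    return None

# Hypothetical property: two nodes must never both hold the lock.
snapshot = {"a": False, "b": False}
events = [
    ("a_acquires", lambda s: {**s, "a": True}),
    ("b_acquires", lambda s: {**s, "b": True}),
]
safe = lambda s: not (s["a"] and s["b"])
print(explore(snapshot, events, safe, depth=2))
# ['a_acquires', 'b_acquires']
```

Exhaustive exploration blows up combinatorially; the article's contribution is restricting the search to causally related chains so the check can run continuously on deployed nodes.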
6. Automatic deployment of distributed teams of robots from temporal logic motion specifications
- Author
-
Kloetzer, Marius and Belta, Calin
- Subjects
Robot, Distributed processing (Computers), Robots -- Usage, Distributed processing (Computers) -- Research, Robots -- Control systems, Robots -- Research
- Published
- 2010
7. Rate and power allocation under the pairwise distributed source coding constraint
- Author
-
Li, Shizheng and Ramamoorthy, Aditya
- Subjects
Distributed processing (Computers), Resource allocation -- Research, Distributed processing (Computers) -- Research, Encoders -- Research
- Published
- 2009
8. Lattices for distributed source coding: jointly Gaussian sources and reconstruction of a linear function
- Author
-
Krithivasan, Dinesh and Pradhan, S. Sandeep
- Subjects
Distributed processing (Computers), Source code -- Research, Lattice theory -- Research, Distributed processing (Computers) -- Research
- Published
- 2009
9. Distributed MIMO systems for nomadic applications over a symmetric interference channel
- Author
-
Simeone, Osvaldo, Somekh, Oren, Poor, H. Vincent, and Shamai, Shlomo
- Subjects
Distributed processing (Computers), MIMO communications -- Research, Electromagnetic interference -- Research, Radio relay systems -- Research, Distributed processing (Computers) -- Research
- Published
- 2009
10. Mitigating denial-of-service attacks on the chord overlay network: a location hiding approach
- Author
-
Srivatsa, Mudhakar and Liu, Ling
- Subjects
Denial of service attacks -- Control, Distributed processing (Computers) -- Research, Overlays and overlaying -- Analysis, Scalability -- Analysis, Distributed processing (Computers), Business, Computers, Electronics, Electronics and electrical industries
- Published
- 2009
11. Distributed hash sketches: scalable, efficient, and accurate cardinality estimation for distributed multisets
- Author
-
Ntarmos, N., Triantafillou, P., and Weikum, G.
- Subjects
Distributed processing (Computers), Distributed processing (Computers) -- Research, Hashing functions -- Research
- Abstract
Counting items in a distributed system, and estimating the cardinality of multisets in particular, is important for a large variety of applications and a fundamental building block for emerging Internet-scale information systems. Examples of such applications range from optimizing query access plans in peer-to-peer data sharing, to computing the significance (rank/score) of data items in distributed information retrieval. The general formal problem addressed in this article is computing the network-wide distinct number of items with some property (e.g., distinct files with file name containing "spiderman") where each node in the network holds an arbitrary subset, possibly overlapping the subsets of other nodes. The key requirements that a viable approach must satisfy are: (1) scalability towards very large network size, (2) efficiency regarding messaging overhead, (3) load balance of storage and access, (4) accuracy of the cardinality estimation, and (5) simplicity and easy integration in applications. This article contributes the DHS (Distributed Hash Sketches) method for this problem setting: a distributed, scalable, efficient, and accurate multiset cardinality estimator. DHS is based on hash sketches for probabilistic counting, but distributes the bits of each counter across network nodes in a judicious manner based on principles of Distributed Hash Tables, paying careful attention to fast access and aggregation as well as update costs. The article discusses various design choices, exhibiting tunable trade-offs between estimation accuracy, hop-count efficiency, and load distribution fairness. We further contribute a full-fledged, publicly available, open-source implementation of all our methods, and a comprehensive experimental evaluation for various settings.
Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems--Distributed applications; H.3.4 [Information Storage and Retrieval]: Systems and Software--Distributed systems
General Terms: Algorithms, Design, Experimentation, Performance
Additional Key Words and Phrases: Distributed estimation, distributed information systems, distributed cardinality estimation, distributed data summary structures, hash sketches, peer-to-peer networks and systems
- Published
- 2009
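DHS builds on hash sketches for probabilistic counting. A brief sketch of that non-distributed core, assuming the standard Flajolet-Martin estimator (DHS's actual contribution, spreading the bits of each counter across DHT nodes, is not shown; the hash choice and constants here are illustrative):

```python
# Flajolet-Martin hash sketch: estimate the number of distinct items
# from the pattern of lowest set bits of their hashes. Duplicates set
# the same bit, so the estimate depends only on distinct items.

import hashlib

def rho(x):
    """Position of the lowest set bit of x (0-based)."""
    return (x & -x).bit_length() - 1

def fm_estimate(items):
    bitmap = 0
    for it in items:
        h = int(hashlib.sha1(str(it).encode()).hexdigest(), 16)
        bitmap |= 1 << rho(h)
    # R = lowest bit position never set; E[count] ~ 2^R / 0.77351
    r = 0
    while bitmap & (1 << r):
        r += 1
    return (2 ** r) / 0.77351

est = fm_estimate(range(10000))
print(est)  # rough estimate of 10000 distinct items; a single sketch
            # has high variance, so real uses average many sketches
```

Because each sketch is just a small bitmap combined with OR, it can be merged across nodes cheaply, which is what makes it attractive as a distributed building block.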
12. Evaluation of control message overhead of a DHT-based P2P system
- Author
-
Hong, Se Gi, Hilt, Volker, and Schulzrinne, Henning
- Subjects
Peer to peer computing -- Research, Distributed processing (Computers) -- Research, Bandwidth -- Research, Distributed processing (Computers), Bandwidth allocation, Bandwidth technology, Science and technology, Telecommunications industry
- Abstract
The control message overhead created by the distributed hash table (DHT)-based peer-to-peer Chord protocol is evaluated under different churn rates and for different network sizes. The DHT-based P2P structure can be stabilized by adjusting the update period. Message overhead for the Chord architecture is moderate under high churn rates and Internet-wide networking.
- Published
- 2008
13. Distributed LQR design for identical dynamically decoupled systems
- Author
-
Borrelli, Francesco and Keviczky, Tamas
- Subjects
Distributed processing (Computers), Control systems -- Design and construction, Distributed processing (Computers) -- Research, Dynamical systems -- Design and construction, Dynamical systems -- Control
- Abstract
We consider a set of identical decoupled dynamical systems and a control problem where the performance index couples the behavior of the systems. The coupling is described through a communication graph where each system is a node and the control action at each node is only a function of its state and the states of its neighbors. A distributed control design method is presented which requires the solution of a single linear quadratic regulator (LQR) problem. The size of the LQR problem is equal to the maximum vertex degree of the communication graph plus one. The design procedure proposed in this paper illustrates how stability of the large-scale system is related to the robustness of local controllers and the spectrum of a matrix representing the desired sparsity pattern of the distributed controller design problem.
Index Terms--Distributed control, linear quadratic regulator (LQR), network control systems, optimal control.
- Published
- 2008
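The abstract states that the size of the single LQR problem to solve equals the maximum vertex degree of the communication graph plus one. A small illustration of that count on a hypothetical graph:

```python
# Size of the local LQR design problem per the abstract above:
# max vertex degree of the communication graph, plus one (the node
# itself together with its largest neighborhood).

def lqr_subproblem_size(edges, nodes):
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return max(degree.values()) + 1

# 4 identical systems on a path graph 0-1-2-3: max degree is 2, so
# the single LQR problem couples 3 subsystems regardless of how long
# the path (and hence the overall network) grows.
print(lqr_subproblem_size([(0, 1), (1, 2), (2, 3)], range(4)))  # 3
```

This is the appeal of the method: the design cost scales with local connectivity, not with the total number of systems.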
14. Optimal adaptive control--contradiction in terms or a matter of choosing the right cost functional?
- Author
-
Krstic, Miroslav
- Subjects
Distributed processing (Computers), Adaptive control -- Methods, Distributed processing (Computers) -- Research, Parameter estimation -- Methods
- Abstract
Approaching the problem of optimal adaptive control as "optimal control made adaptive," namely as a certainty equivalence combination of linear quadratic optimal control and standard parameter estimation, fails on two counts: numerical (as it requires a solution to a Riccati equation at each time step) and conceptual (as the combination actually does not possess any optimality property). In this note, we present a particular form of optimality achievable in Lyapunov-based adaptive control. State and control are subject to positive definite penalties, whereas the parameter estimation error is penalized through an exponential of its square, which means that no attempt is made to enforce the parameter convergence, but the estimation transients are penalized simultaneously with the state and control transients. The form of optimality we reveal here is different from our work in [Z. H. Li and M. Krstic, "Optimal design of adaptive tracking controllers for nonlinear systems," Automatica, vol. 33, pp. 1459-1473, 1997] where only the terminal value of the parameter error was penalized. We present our optimality concept on a partial differential equation (PDE) example--boundary control of a particular parabolic PDE with an unknown reaction coefficient. Two technical ideas are central to the developments in the note: a nonquadratic Lyapunov function and a normalization in the Lyapunov-based update law. The optimal adaptive control problem is fundamentally nonlinear and we explore this aspect through several examples that highlight the interplay between the nonquadratic cost and value functions.
Index Terms--Adaptive control, backstepping, boundary control, distributed parameter systems.
- Published
- 2008
15. Finite-time semistability and consensus for nonlinear dynamical networks
- Author
-
Hui, Qing, Haddad, Wassim M., and Bhat, Sanjay P.
- Subjects
Distributed processing (Computers), Distributed processing (Computers) -- Research, Finite element method -- Usage, Dynamical systems -- Design and construction, Dynamical systems -- Control
- Abstract
This paper focuses on semistability and finite-time stability analysis and synthesis of systems having a continuum of equilibria. Semistability is the property whereby the solutions of a dynamical system converge to Lyapunov stable equilibrium points determined by the system initial conditions. In this paper, we merge the theories of semistability and finite-time stability to develop a rigorous framework for finite-time semistability. In particular, finite-time semistability for a continuum of equilibria of continuous autonomous systems is established. Continuity of the settling-time function as well as Lyapunov and converse Lyapunov theorems for semistability are also developed. In addition, necessary and sufficient conditions for finite-time semistability of homogeneous systems are addressed by exploiting the fact that a homogeneous system is finite-time semistable if and only if it is semistable and has a negative degree of homogeneity. Unlike previous work on homogeneous systems, our results involve homogeneity with respect to semistable dynamics, and require us to adopt a geometric description of homogeneity. Finally, we use these results to develop a general framework for designing semistable protocols in dynamical networks for achieving coordination tasks in finite time.
Index Terms--Consensus protocols, distributed control, finite-time stability, homogeneity, multiagent systems, semistability, state equipartition, thermodynamic networks.
- Published
- 2008
16. On sustained QoS guarantees in operated IEEE 802.11 wireless LANs
- Author
-
Nafaa, Abdelhamid and Ksentini, Adlen
- Subjects
Distributed processing (Computers) -- Research, Local area networks -- Design and construction, Distributed processing (Computers), LAN, Pre-packaged LAN, Virtual LAN, Business, Computers, Electronics, Electronics and electrical industries
- Abstract
A novel cross-layer Media Access Control (MAC) design featuring a delay-sensitive backoff range adaptation with a distributed flow admission control is presented. Findings reveal the consistent performance of the proposed protocol in terms of network utilization, bounded delays and service-level fairness.
- Published
- 2008
17. Adaptive boundary control for unstable parabolic PDEs--part I: Lyapunov design
- Author
-
Krstic, Miroslav and Smyshlyaev, Andrey
- Subjects
Distributed processing (Computers), Adaptive control -- Methods, Liapunov functions -- Evaluation, Distributed processing (Computers) -- Research, Differential equations, Partial -- Evaluation, Control systems -- Design and construction
- Abstract
We develop adaptive controllers for parabolic partial differential equations (PDEs) controlled from a boundary and containing unknown destabilizing parameters affecting the interior of the domain. These are the first adaptive controllers for unstable PDEs without relative degree limitations, open-loop stability assumptions, or domain-wide actuation. It is the first necessary step towards developing adaptive controllers for physical systems such as fluid, thermal, and chemical dynamics, where actuation can be only applied non-intrusively, the dynamics are unstable, and the parameters, such as the Reynolds, Rayleigh, Prandtl, or Peclet numbers are unknown because they vary with operating conditions. Our method builds upon our explicitly parametrized control formulae in [27] to avoid solving Riccati or Bezout equations at each time step. Most of the designs we present are state feedback but we present two benchmark designs with output feedback which have infinite relative degree.
Index Terms--Adaptive control, backstepping, boundary control, distributed parameter systems.
- Published
- 2008
18. Enforcing consensus while monitoring the environment in wireless sensor networks
- Author
-
Braca, Paolo, Marano, Stefano, and Matta, Vincenzo
- Subjects
Distributed processing (Computers) -- Research, Signal processing -- Research, Mobile communication systems -- Analysis, Wireless communication systems -- Analysis, Distributed processing (Computers), Digital signal processor, Wireless technology, Business, Computers, Electronics, Electronics and electrical industries
- Abstract
The behavior of a wireless sensor network (WSN) that continuously senses the surrounding environment with simultaneous enforcement of consensus among its nodes is discussed.
- Published
- 2008
19. A novel algorithm for mining association rules in wireless Ad Hoc Sensor Networks
- Author
-
Boukerche, Azzedine and Samarah, Samer
- Subjects
Distributed processing (Computers) -- Research, Network architecture -- Research, Mobile communication systems -- Analysis, Wireless communication systems -- Analysis, Distributed processing (Computers), Network architecture, Wireless technology, Business, Computers, Electronics, Electronics and electrical industries
- Abstract
A comprehensive framework for mining wireless Ad Hoc Sensor Networks (WASNs) is proposed. Findings reveal the efficiency of the proposed framework.
- Published
- 2008
20. Improving the performances of distributed coordinated scheduling in IEEE 802.16 mesh networks
- Author
-
Wang, Shie-Yuan, Lin, Chih-Che, Chu, Han-Wei, Hsu, Teng-Wei, and Fang, Ku-Han
- Subjects
Mesh networks -- Design and construction, Wi-Max -- Equipment and supplies, Distributed processing (Computers) -- Research, Access control (Computers) -- Standards, Distributed processing (Computers), Network access, Business, Electronics, Electronics and electrical industries, Transportation industry
- Abstract
The IEEE 802.16 mesh network is a promising next-generation wireless backbone network. In such a network, setting the holdoff time for nodes is essential to achieving good medium-access-control-layer scheduling performance. In this paper, we propose a two-phase holdoff time setting scheme to improve network utilization. Both static and dynamic approaches of this scheme are proposed, and their performances are compared against those of the original schemes. Our simulation results show that both approaches significantly increase the utilization of the control-plane bandwidth and decrease the time required to establish data schedules. In addition, both approaches provide efficient and fair scheduling for IEEE 802.16 mesh networks and yield good application performance.
Index Terms--Distributed scheduling, IEEE 802.16(d), mesh network.
- Published
- 2008
21. Adaptive gain control for spike-based map communication in a neuromorphic vision system
- Author
-
Meng, Yicong and Shi, Bertram E.
- Subjects
Neural networks -- Design and construction, Adaptive control -- Methods, Distributed processing (Computers) -- Research, Machine vision -- Control, Neural network, Distributed processing (Computers), Business, Computers, Electronics, Electronics and electrical industries
- Abstract
To support large numbers of model neurons, neuromorphic vision systems are increasingly adopting a distributed architecture, where different arrays of neurons are located on different chips or processors. Spike-based protocols are used to communicate activity between processors. The spike activity in the arrays depends on the input statistics as well as internal parameters such as time constants and gains. In this paper, we investigate strategies for automatically adapting these parameters to maintain a constant firing rate in response to changes in the input statistics. We find that under the constraint of maintaining a fixed firing rate, a strategy based upon updating the gain alone performs as well as an optimal strategy where both the gain and the time constant are allowed to vary. We discuss how to choose the time constant and propose an adaptive gain control mechanism whose operation is robust to changes in the input statistics. Our experimental results on a mobile robotic platform validate the analysis and efficacy of the proposed strategy.
Index Terms--Adaptive systems, distributed computing, neuromorphic systems, spiking neural networks.
- Published
- 2008
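The gain-only adaptation strategy the abstract describes can be imitated with a toy multiplicative update (the linear "firing" model, target rate, and step size here are all invented for illustration; they are not the authors' neuron model):

```python
# Toy gain-only rate control: raise the gain when the observed firing
# rate is below target, lower it when above, so the rate recovers
# after the input statistics shift.

def adapt_gain(inputs, target_rate, gain=1.0, step=0.05):
    """Multiplicative gain update toward a target firing rate."""
    rates = []
    for mean_input in inputs:
        rate = gain * mean_input              # stand-in for firing rate
        gain *= (target_rate / rate) ** step  # geometric correction
        rates.append(rate)
    return rates, gain

# Input statistics shift from 2.0 to 8.0 halfway through; in both
# regimes the rate converges back toward the target of 10.
rates, final_gain = adapt_gain([2.0] * 50 + [8.0] * 50, target_rate=10.0)
print(round(rates[49], 2), round(rates[99], 2))
```

In log-gain terms this update is a first-order filter toward the target, which is why the recovery after the input shift is geometric.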
22. On the design of distributed object placement and load balancing strategies in large-scale networked multimedia storage systems
- Author
-
Zeng, Zeng and Veeravalli, Bharadwaj
- Subjects
Computer storage devices -- Design and construction, Multimedia technology -- Equipment and supplies, Distributed processing (Computers) -- Research, Queuing theory -- Research, Data storage device, Multimedia technology, Distributed processing (Computers), Business, Computers, Electronics, Electronics and electrical industries
- Abstract
In a large-scale multimedia storage system (LMSS) where client requests for different multimedia objects may have different demands, the placement and replication of the objects is an important factor, as it may result in an imbalance in server loading across the system. Since replication management and load balancing are all the more crucial issues in multimedia systems, in the literature, these problems are handled by centralized servers. Each object storage server (OSS) responds to the requests that come from the centralized servers independently and has no communication with other OSSs in the system. In this paper, we design a novel distributed load balancing strategy for LMSS, in which OSSs can cooperate to achieve higher performance. Such an OSS, modeled as an M/G/m system, can replicate the objects to and balance the requests among other servers to achieve a near-optimal average waiting time (AWT) of the requests in the system. We validate the performance of the system via rigorous simulations with respect to several influencing factors and prove that our proposed strategy is scalable, flexible, and efficient for real-life applications.
Index Terms--Multimedia storage system, request balancing, distributed system, average waiting time, queuing theory.
- Published
- 2008
23. Distributed Control Architecture for Self-reconfigurable Manipulators
- Author
-
Turetta, A., Casalino, G., and Sorbara, A.
- Subjects
Robots -- Design and construction, Distributed processing (Computers) -- Research, Robotics -- Research -- Design and construction, Computers and office automation industries, Engineering and manufacturing industries, Distributed processing (Computers), Robot, Design and construction, Research
- Abstract
Byline: A. Turetta (DIST - University of Genova, Via Opera Pia 13, 16145 Genova, Italy, turetta@dist.unige.it); G. Casalino (DIST - University of Genova, Via Opera Pia 13, 16145 Genova, Italy, casalino@dist.unige.it); A. [...]
- Published
- 2008
24. High-bandwidth data dissemination for large-scale distributed systems
- Author
-
Kostic, Dejan, Snoeren, Alex C., Vahdat, Amin, Braud, Ryan, Killian, Charles, Anderson, James W., Albrecht, Jeannie, Rodriguez, Adolfo, and Vandekieft, Erik
- Subjects
Bandwidth allocation, Bandwidth technology, Distributed processing (Computers), Network architecture, Bandwidth -- Measurement, Electronic data processing -- Methods, Distributed processing (Computers) -- Research, Network architecture -- Evaluation
- Abstract
This article focuses on the multireceiver data dissemination problem. Initially, IP multicast formed the basis for efficiently supporting such distribution. More recently, overlay networks have emerged to support point-to-multipoint communication. Both techniques focus on constructing trees rooted at the source to distribute content among all interested receivers. We argue, however, that trees have two fundamental limitations for data dissemination. First, since all data comes from a single parent, participants must often continuously probe in search of a parent with an acceptable level of bandwidth. Second, due to packet losses and failures, available bandwidth is monotonically decreasing down the tree. To address these limitations, we present Bullet, a data dissemination mesh that takes advantage of the computational and storage capabilities of end hosts to create a distribution structure where a node receives data in parallel from multiple peers. For the mesh to deliver improved bandwidth and reliability, we need to solve several key problems: (i) disseminating disjoint data over the mesh, (ii) locating missing content, (iii) finding who to peer with (peering strategy), (iv) retrieving data at the right rate from all peers (flow control), and (v) recovering from failures and adapting to dynamically changing network conditions. Additionally, the system should be self-adjusting and should have few user-adjustable parameter settings. We describe our approach to addressing all of these problems in a working, deployed system across the Internet. Bullet outperforms state-of-the-art systems, including BitTorrent, by 25-70% and exhibits strong performance and reliability in a range of deployment settings. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.
Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems; H.4.3 [Information Systems Applications]: Communications Applications
General Terms: Experimentation, Management, Performance
Additional Key Words and Phrases: Bandwidth, overlays, peer-to-peer
- Published
- 2008
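Bullet's mesh lets a node receive disjoint data in parallel from multiple peers rather than from a single tree parent. A toy sketch of that retrieval pattern (peer block sets and the round-robin scheduling are invented; the deployed system's peering strategy, flow control, and recovery are far richer):

```python
# Toy mesh retrieval: fetch disjoint blocks of a file in parallel
# from several peers and reassemble, instead of relying on one parent.

def fetch_from_mesh(peers, total_blocks):
    """peers: dict peer -> set of block ids that peer holds.
    Returns (have: block -> serving peer, missing block ids)."""
    have = {}
    missing = set(range(total_blocks))
    while missing:
        progress = False
        for peer, blocks in peers.items():
            useful = blocks & missing        # disjoint data per round
            if useful:
                b = min(useful)
                have[b] = peer               # record who served block b
                missing.discard(b)
                progress = True
        if not progress:
            break   # no peer holds the rest: must locate content elsewhere
    return have, missing

peers = {"p1": {0, 1, 2}, "p2": {2, 3}, "p3": {3, 4, 5}}
have, missing = fetch_from_mesh(peers, 6)
print(sorted(have), missing)  # [0, 1, 2, 3, 4, 5] set()
```

The sketch makes the two tree limitations from the abstract concrete: no single parent bounds the download rate, and losing one peer removes only that peer's blocks.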
25. A generic component model for building systems software
- Author
-
Coulson, Geoff, Blair, Gordon, Grace, Paul, Taiani, Francois, Joolia, Ackbar, Lee, Kevin, Ueyama, Jo, and Sivaharan, Thirunavukkarasu
- Subjects
32-bit operating system, 64-bit operating system, Operating system, Real-time system, Distributed processing (Computers), Operating systems -- Design and construction, Real-time control -- Design and construction, Real-time systems -- Design and construction, Distributed processing (Computers) -- Research
- Abstract
Component-based software structuring principles are now commonplace at the application level; but componentization is far less established when it comes to building low-level systems software. Although there have been pioneering efforts in applying componentization to systems-building, these efforts have tended to target specific application domains (e.g., embedded systems, operating systems, communications systems, programmable networking environments, or middleware platforms). They also tend to be targeted at specific deployment environments (e.g., standard personal computer (PC) environments, network processors, or microcontrollers). The disadvantage of this narrow targeting is that it fails to maximize the genericity and abstraction potential of the component approach. In this article, we argue for the benefits and feasibility of a generic yet tailorable approach to component-based systems-building that offers a uniform programming model that is applicable in a wide range of systems-oriented target domains and deployment environments. The component model, called OpenCom, is supported by a reflective runtime architecture that is itself built from components. After describing OpenCom and evaluating its performance and overhead characteristics, we present and evaluate two case studies of systems we have built using OpenCom technology, thus illustrating its benefits and its general applicability.
Categories and Subject Descriptors: D.4.7 [Operating Systems]: Organization and Design--Distributed systems; real-time systems and embedded systems; D.2.6 [Software Engineering]: Programming Environments--Programmer workbench
General Terms: Design, Standardization, Management
Additional Key Words and Phrases: Component-based software, computer systems implementation
- Published
- 2008
26. Efficient and scalable algorithms for inferring likely invariants in distributed systems
- Author
-
Jiang, Guofei, Chen, Haifeng, and Yoshihira, Kenji
- Subjects
Algorithms -- Usage, Distributed processing (Computers) -- Research, Time-series analysis -- Usage, Invariants -- Properties, Information management -- Research, Algorithm, Distributed processing (Computers), Information accessibility, Business, Computers, Electronics, Electronics and electrical industries
- Abstract
Distributed systems generate a large amount of monitoring data such as log files to track their operational status. However, it is hard to correlate such monitoring data effectively across distributed systems and along observation time for system management. In previous work, we proposed a concept named flow intensity to measure the intensity with which internal monitoring data reacts to the volume of user requests. We calculated flow intensity measurements from monitoring data and proposed an algorithm to automatically search constant relationships between flow intensities measured at various points across distributed systems. If such relationships hold all the time, we regard them as invariants of the underlying systems. Invariants can be used to characterize complex systems and support various system management tasks. However, the computational complexity of the previous invariant search algorithm is high so that it may not scale well in large systems with thousands of measurements. In this paper, we propose two efficient but approximate algorithms for inferring invariants in large-scale systems. The computational complexity of new randomized algorithms is significantly reduced, and experimental results from a real system are also included to demonstrate the accuracy and efficiency of our new algorithms.
Index Terms--Distributed systems, monitoring data, time series, data management, invariants, randomized algorithms, computational complexity.
- Published
- 2007
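The invariant search the abstract describes tests whether flow intensities measured at different points stay in a constant relationship over time. A simplified sketch of the core check, assuming a plain linear least-squares fit with a residual threshold (the data, tolerance, and model form here are invented; the paper's model class and its randomized search over measurement pairs are more general):

```python
# Simplified invariant check: fit y ~ a*x + b between two monitoring
# time series; keep (a, b) as a likely invariant only if the fit
# error stays small across the whole observation window.

def fit_invariant(x, y, tol=0.05):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    # Normalized worst-case residual: does the relationship hold
    # at every observation, not just on average?
    err = max(abs(yi - (a * xi + b)) for xi, yi in zip(x, y))
    scale = max(abs(v) for v in y)
    return (a, b) if err <= tol * scale else None

requests = [10, 20, 30, 40, 50]    # e.g. user request volume
db_calls = [32, 61, 92, 121, 152]  # tracks requests roughly as 3x + 2
print(fit_invariant(requests, db_calls))
```

Running the check over all measurement pairs is what makes the naive search quadratic in the number of measurements, which motivates the approximate randomized algorithms the paper proposes.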
27. Incremental adaptive strategies over distributed networks
- Author
-
Lopes, Cassio G. and Sayed, Ali H.
- Subjects
Estimation theory -- Analysis, Distributed processing (Computers) -- Research, Adaptive control -- Analysis, Distributed processing (Computers), Business, Computers, Electronics, Electronics and electrical industries
- Abstract
The development of an adaptive distributed strategy based on incremental techniques is discussed. The strategy is useful for addressing the problem of linear estimation in a cooperative manner.
- Published
- 2007
28. Centralized and distributed voltage control: impact on distributed generation penetration
- Author
-
Vovos, Panagis N., Kiprakis, Aristides E., Wallace, A. Robin, and Harrison, Gareth P.
- Subjects
Distributed processing (Computers) -- Research ,Electric power distribution -- Research ,Electric current regulators -- Research ,Voltage regulators -- Research ,Distributed processing (Computers) ,Business ,Electronics ,Electronics and electrical industries - Abstract
With the rapid increase in distributed generation (DG), the issue of voltage regulation in the distribution network becomes more significant, and centralized voltage control (or active network management) is one of the proposed methods. Alternative work on intelligent distributed voltage and reactive power control of DG has also demonstrated benefits in terms of the minimization of voltage variation and violations as well as the ability to connect larger generators to the distribution network. This paper uses optimal power flow to compare the two methods and shows that intelligent distributed voltage and reactive power control of the DG gives similar results to those obtained by centralized management in terms of the potential for connecting increased capacities within existing networks. Index Terms--Dispersed storage and generation, optimal power flow, power distribution planning, power generation control, voltage control.
- Published
- 2007
29. A hybrid islanding detection technique using voltage unbalance and frequency set point
- Author
-
Menon, Vivek and Nehrir, M. Hashem
- Subjects
Distributed processing (Computers) -- Research ,Electric generators -- Research ,Electric power transmission -- Research ,Distributed processing (Computers) ,Business ,Electronics ,Electronics and electrical industries - Abstract
The phenomenon of unintentional islanding occurs when a distributed generator (DG) continues to feed power into the grid when power flow from the central utility source has been interrupted. This phenomenon can result in serious injury to the linemen who are trying to fix the power outage problem. Several techniques have been proposed in the past to avoid such an occurrence. This paper first gives an overview of the dominant islanding detection techniques. Then, the principles of two of the suggested techniques are combined to obtain a new hybrid islanding detection technique for synchronously rotating DGs. Simulation results show that the proposed hybrid technique is more effective than each of the techniques used. Simulation results are given for two testbeds to verify the advantages of the proposed hybrid islanding detection technique. Index Terms--Distributed generators (DGs), islanding, microgrid (MG), point of common coupling (PCC).
- Published
- 2007
30. A stability algorithm for the dynamic analysis of inverter dominated unbalanced LV microgrids
- Author
-
Soultanis, Nikos L., Papathanasiou, Stavros A., and Hatziargyriou, Nikos D.
- Subjects
Algorithms -- Research ,Distributed processing (Computers) -- Research ,Electric inverters -- Research ,Algorithm ,Distributed processing (Computers) ,Business ,Electronics ,Electronics and electrical industries - Abstract
In this paper, an algorithm is presented, suitable for simulating the dynamic behavior of LV Microgrids both under grid connected and autonomous operation. The algorithm follows the stability approach, focusing on low-frequency dynamics, and adjusts the standard methodology so that the dynamic analysis of the system can be carried out, even in the absence of a synchronous machine when all the sources are interfaced to the network with inverters. Proper network representation allows for the modeling of all the characteristic unbalances of the LV network. The capability of the algorithm to simulate the operating modes of a Microgrid is demonstrated by representative study cases. Index Terms--Distributed generation (DG), inverter control, low-voltage (LV) networks, microgrids.
- Published
- 2007
31. An efficient algorithm for solving BCOP and implementation
- Author
-
Lin, Shieh-Shing and Chang, Huay
- Subjects
Electric power transmission -- Research ,Algorithms -- Research ,Computer networks -- Research ,Information networks -- Research ,Distributed processing (Computers) -- Research ,Distributed processing (Computers) ,Algorithm ,Business ,Electronics ,Electronics and electrical industries - Abstract
The distributed constrained state estimation (DCSE) and distributed optimal power flow (DOPF) problems are kinds of large-scale block-additive constrained optimization problems (BCOP) in their exact formulation. In this paper, we propose a general-form methodology, an efficient distributed asynchronous dual-type method, to cope with the computational difficulty of these problems. Our method achieves some attractive features, being both computationally efficient and numerically stable. The computational formulae of our method are explicit, compact, and easy to program. We have demonstrated the computational efficiency of our method by comparing it with a state-of-the-art algorithm through the implementation of two different types of real PC-networks in an asynchronous computing environment on the IEEE 118-bus and IEEE 244-bus systems. Index Terms--Asynchronous computing environment, distributed asynchronous dual-type method, distributed constrained state estimation, distributed optimal power flow (DOPF), PC-networks.
- Published
- 2007
32. A multiagent-based dispatching scheme for distributed generators for voltage support on distribution feeders
- Author
-
Baran, Mesut E. and Markabi, Ismail M. El-
- Subjects
Distributed processing (Computers) -- Research ,Electric power distribution -- Research ,Mathematical optimization -- Research ,Electric current regulators -- Research ,Voltage regulators -- Research ,Distributed processing (Computers) ,Business ,Electronics ,Electronics and electrical industries - Abstract
This paper illustrates how a multiagent system (MAS)-based scheme can be developed for a control/optimization problem. The prototype problem considered is the dispatching of distributed generators on a distribution feeder to provide voltage support. The popular contract net protocol (CNP) for MAS has been adopted to facilitate distributed control. This paper illustrates that characterization of the optimal solution is necessary in order to develop a protocol for the MAS to implement. This paper also shows that MAS facilitates a model-free control procedure, as it can monitor the local sensitivities. Test results, based on simulations on a prototype feeder, show that the proposed MAS-based control scheme is very effective in obtaining the solution for the prototype problem. The proposed method needs fast communication among the distributed generators (DGs) in order to assure fast response during emergency conditions. Communication requirements have also been identified in this paper. Index Terms--Distributed control, distributed generation (DG), optimization, power distribution, voltage regulation.
- Published
- 2007
33. Intelligent diagnostic requirements of future all-electric ship integrated power system
- Author
-
Logan, Kevin P.
- Subjects
Electric power systems -- Research ,Distributed processing (Computers) -- Research ,Intelligent control systems -- Research ,Knowledge management -- Research ,Distributed processing (Computers) ,Knowledge management ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Future ship integrated power systems (IPSs) will be characterized by complex topologies of advanced power electronics and other evolving components. Advanced capabilities, such as intelligent reconfiguration of system function and connectivity will be possible; however, system level knowledge of component failure will be needed for intelligent power distribution under failure mode conditions. Diagnostic and prognostic coverage for sensors, components, and subsystems will be essential for achieving reliability goals. This paper will look at some diagnostic requirements and emerging technologies available for insertion into future ship IPSs. Index Terms--All-electric ship, diagnostics, distributed intelligence, integrated power systems (IPSs), intelligent software agents, knowledge management, prognostics.
- Published
- 2007
34. Adaptive pre-task assignment scheduling strategy for heterogeneous distributed raytracing system
- Author
-
Qureshi, Kalim and Manuel, Paul
- Subjects
Distributed processing (Computers) -- Research ,Load balancing (Computers) -- Research ,Scheduling (Management) -- Research ,Distributed processing (Computers) ,Load balancing ,Computers ,Electronics ,Engineering and manufacturing industries - Abstract
One of the main obstacles to obtaining high performance from a heterogeneous distributed computing (HDC) system is the inevitable communication overhead. This occurs when tasks executing on different computing nodes exchange data or when the assigned sub-task size is very small. In this paper, we present an adaptive pre-task assignment (APA) strategy for a heterogeneous distributed raytracing system. In this strategy, the master assigns a pre-task to each node. The size of the sub-task for each node is proportional to the node's performance. One of the main features of this strategy is that it reduces inter-process communication, the cost overhead of node idle time, and the load imbalance that normally occurs in traditional runtime task scheduling (RTS) strategies. The performance of the RTS and APA strategies is evaluated on the manager/master and workers model of an HDC system. The experimental results of our proposed APA strategy show a significant improvement in performance over the RTS strategy. Keywords: Task partitioning and scheduling; Load balancing; Heterogeneous distributed computing; Runtime task scheduling strategy; Adaptive pre-task assignment strategy; Distributed image/raytracing computing; Performance evaluation
- Published
- 2007
35. Type-based publish/subscribe: concepts and experiences
- Author
-
Eugster, Patrick
- Subjects
Distributed processing (Computers) ,Java ,Distributed object technology ,Object-oriented programming ,Reusable code ,Computer networks -- Analysis ,Information networks -- Analysis ,Distributed processing (Computers) -- Research ,Java (Computer program language) -- Analysis ,Object-oriented programming -- Analysis - Abstract
A continuously increasing number of interconnected computer devices makes the requirement for programming abstractions for remote one-to-many interaction yet more stringent. The publish/subscribe paradigm has been advocated as a candidate abstraction for such one-to-many interaction at large scale. Common practices in publish/subscribe, however, include low-level abstractions which hardly leverage type safety, and provide only poor support for object encapsulation. This tends to put additional burden on software developers; guarantees such as the aforementioned type safety and object encapsulation become of increasing importance with an accrued number of software components, which modern applications also involve, besides an increasing number of hardware components. Type-based publish/subscribe (TPS) is a high-level variant of the publish/subscribe paradigm which aims precisely at providing guarantees such as type safety and encapsulation. We present the rationale and principles underlying TPS, as well as two implementations in Java: the first based on a specific extension of the Java language, and a second novel implementation making use of recent general-purpose features of Java, such as generics and behavioral reflection. We compare the two approaches, thereby evaluating the aforementioned features--as well as additional features which have been included in the most recent Java 1.5 release--in the context of distributed and concurrent programming. We discuss the benefits of alternative programming languages and features for implementing TPS. By revisiting alternative abstractions for distributed programming, including "classic" and recent ones, we extend our investigations to programming language support for distributed programming in general, pointing out that overall, the support in current mainstream programming languages is still insufficient. 
Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems--Distributed applications; D.1.5 [Programming Techniques]: Object-Oriented Programming; D.1.3 [Programming Techniques]: Concurrent Programming--Distributed programming; D.3.2 [Programming Languages]: Language Classifications--Object-oriented languages General Terms: Languages, Design Additional Key Words and Phrases: Abstraction, generics, Java, publish/subscribe, reflection, type, distribution ACM Reference Format: Eugster, P. 2007. Type-based publish/subscribe: Concepts and experiences. ACM Trans. Program. Lang. Syst. 29, 1, Article 6 (January 2007), 50 pages. DOI = 10.1145/1180475.1180481 http://doi.acm.org/10.1145/1180475.1180481.
- Published
- 2007
36. On parallelization of a spatially-explicit structured ecological model for integrated ecosystem simulation
- Author
-
Wang, Dali, Berry, Michael W., and Gross, Louis J.
- Subjects
Distributed processing (Computers) ,Parallel processing ,Distributed processing (Computers) -- Research ,Parallel processing -- Research ,Computer simulation -- Research ,Computer-generated environments -- Research - Abstract
Abstract Spatially explicit landscape population models are widely used to analyze the dynamics of an ecological species over a realistic landscape. These models may be data intensive applications when they […]
- Published
- 2006
37. Scheduling messages for data redistribution: an experimental study
- Author
-
Jeannot, Emmanuel and Wagner, Frederic
- Subjects
Distributed processing (Computers) ,Electronic data processing -- Research ,Distributed processing (Computers) -- Research - Abstract
Abstract Data redistribution has been widely studied in the literature. In recent years, several papers proposed scheduling algorithms to execute redistributions under different constraints in a minimal amount of time. […]
- Published
- 2006
38. A detailed performance analysis of the interpolation supplemented lattice Boltzmann method on the Cray T3E and Cray X1
- Author
-
Sunder, C. Shyam, Baskar, G., Babu, V., and Strenski, David
- Subjects
Distributed processing (Computers) ,Collective memory -- Research ,Multiprocessors -- Design and construction -- Research ,Distributed processing (Computers) -- Research - Abstract
Abstract A detailed study of the parallel performance of the interpolation supplemented lattice Boltzmann (ISLB) method using SHMEM and MPI on the Cray T3E-900 and Cray X1 architectures is presented. […]
- Published
- 2006
39. Distributed analysis jobs with the atlas production system
- Author
-
Gonzalez, Santiago, Liko, Dietrich, Nairz, Armin, Mair, Gregor, Orellana, Frederik, Goossens, Luc, Resconi, Silvia, and de Salvo, Alessandro
- Subjects
Distributed processing (Computers) -- Research ,Distributed processing (Computers) ,Business ,Electronics ,Electronics and electrical industries - Abstract
The Large Hadron Collider at CERN will start data acquisition in 2007. The A Toroidal LHC ApparatuS (ATLAS) experiment is preparing for the data handling and analysis via a series of data challenges and production exercises to validate its computing model and to provide useful samples of data for detector and physics studies. The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10 000 jobs were processed by the system on about 100 sites in one day. In this paper, we discuss the experience of performing analysis jobs using this system on the LCG infrastructure. Index Terms--ATLAS, distributed analysis, grid, high energy physics.
- Published
- 2006
40. On self-healing key distribution schemes
- Author
-
Blundo, Carlo, D'Arco, Paolo, and De Santis, Alfredo
- Subjects
Distributed processing (Computers) ,Information theory -- Research ,Distributed processing (Computers) -- Research - Abstract
Self-healing key distribution schemes allow group managers to broadcast session keys to large and dynamic groups of users over unreliable channels. Roughly speaking, even if during a certain session some broadcast messages are lost due to network faults, the self-healing property of the scheme enables each group member to recover the key from the broadcast messages he has received before and after that session. Such schemes are quite suitable in supporting secure communication in wireless networks and mobile wireless ad-hoc networks. Recent papers have focused on self-healing key distribution, and have provided definitions, stated in terms of the entropy function, and some constructions. The contribution of this paper is the following.
* We analyze current definitions of self-healing key distribution and, for two of them, we show that no protocol can achieve the definition.
* We show that a lower bound on the size of the broadcast message, previously derived, does not hold.
* We propose a new definition of self-healing key distribution, and we show that it can be achieved by concrete schemes.
* We give some lower bounds on the resources required for implementing such schemes, i.e., user memory storage and communication complexity. We prove that the bounds are tight.
Index Terms--Group communication, information theory, key distribution, reliability, self-healing.
- Published
- 2006
41. The distributed Karhunen-Loeve transform
- Author
-
Gastpar, Michael, Dragotti, Pier Luigi, and Vetterli, Martin
- Subjects
Digital signal processor ,Distributed processing (Computers) ,Signal processing -- Methods ,Distributed processing (Computers) -- Research - Abstract
The Karhunen-Loeve transform (KLT) is a key element of many signal processing and communication tasks. Many recent applications involve distributed signal processing, where it is not generally possible to apply the KLT to the entire signal; rather, the KLT must be approximated in a distributed fashion. This paper investigates such distributed approaches to the KLT, where several distributed terminals observe disjoint subsets of a random vector. We introduce several versions of the distributed KLT. First, a local KLT is introduced, which is the optimal solution for a given terminal, assuming all else is fixed. This local KLT is different and in general improves upon the marginal KLT which simply ignores other terminals. Both optimal approximation and compression using this local KLT are derived. Two important special cases are studied in detail, namely, the partial observation KLT which has access to a subset of variables, but aims at reconstructing them all, and the conditional KLT which has access to side information at the decoder. We focus on the jointly Gaussian case, with known correlation structure, and on approximation and compression problems. Then, the distributed KLT is addressed by considering local KLTs in turn at the various terminals, leading to an iterative algorithm which is locally convergent, sometimes reaching a global optimum, depending on the overall correlation structure. For compression, it is shown that the classical distributed source coding techniques admit a natural transform coding interpretation, the transform being the distributed KLT. Examples throughout illustrate the performance of the proposed distributed KLT. This distributed transform has potential applications in sensor networks, distributed image databases, hyper-spectral imagery, and data fusion. Index Terms--Distributed source coding, distributed transforms, rate--distortion function, principal components analysis, side information, transform coding.
- Published
- 2006
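The marginal KLT that the abstract says the local KLT improves upon can be sketched in a few lines: each terminal diagonalizes only its own covariance block and keeps the strongest components, ignoring cross-terminal correlations. This is an illustrative baseline, not the paper's local or distributed KLT; the function names are assumptions.

```python
import numpy as np

def marginal_klt(cov_local, k):
    """Marginal KLT for one terminal: diagonalize only the local
    covariance block and keep the k strongest principal directions."""
    w, v = np.linalg.eigh(cov_local)     # ascending eigenvalues
    order = np.argsort(w)[::-1]          # strongest components first
    return v[:, order[:k]]               # n x k transform matrix

def approx_error(cov, basis):
    """Mean-squared reconstruction error of projecting onto `basis`."""
    p = basis @ basis.T                  # orthogonal projector
    return np.trace(cov - p @ cov @ p)
```

The paper's local KLT differs precisely in that it optimizes each terminal's transform for reconstructing the *entire* random vector, which in general changes the retained subspace.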
42. Speculative execution in a distributed file system
- Author
-
Nightingale, Edmund B., Chen, Peter M., and Flinn, Jason
- Subjects
32-bit operating system ,64-bit operating system ,Operating system ,Distributed processing (Computers) ,Operating system enhancement ,Company business management ,Computer files -- Management ,Operating systems -- Research ,Distributed processing (Computers) -- Research ,Operating system enhancements -- Research - Abstract
Speculator provides Linux kernel support for speculative execution. It allows multiple processes to share speculative state by tracking causal dependencies propagated through interprocess communication. It guarantees correct execution by preventing speculative processes from externalizing output, for example, sending a network message or writing to the screen, until the speculations on which that output depends have proven to be correct. Speculator improves the performance of distributed file systems by masking I/O latency and increasing I/O throughput. Rather than block during a remote operation, a file system predicts the operation's result, then uses Speculator to checkpoint the state of the calling process and speculatively continue its execution based on the predicted result. If the prediction is correct, the checkpoint is discarded; if it is incorrect, the calling process is restored to the checkpoint, and the operation is retried. We have modified the client, server, and network protocol of two distributed file systems to use Speculator. For PostMark and Andrew-style benchmarks, speculative execution results in a factor of 2 performance improvement for NFS over local area networks and an order of magnitude improvement over wide area networks. For the same benchmarks, Speculator enables the Blue File System to provide the consistency of single-copy file semantics and the safety of synchronous I/O, yet still outperform current distributed file systems with weaker consistency and safety. Categories and Subject Descriptors: D.4.3 [Operating Systems]: File Systems Management--Distributed file systems; D.4.7 [Operating Systems]: Organization and Design; D.4.8 [Operating Systems]: Performance General Terms: Performance, Design Additional Key Words and Phrases: Distributed file systems, speculative execution, causality
- Published
- 2006
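The predict/checkpoint/verify cycle described in the abstract above can be sketched generically. This is a pure-Python illustration of the control flow only — Speculator itself implements it inside the Linux kernel with real process checkpoints and buffered output — and every name here is hypothetical:

```python
def speculative_read(predict, execute_remote, checkpoint, restore, compute):
    """Sketch of Speculator's pattern: predict the result of a remote
    operation, checkpoint, continue speculatively, and roll back only
    when the prediction turns out to be wrong."""
    guess = predict()
    cp = checkpoint()          # save state before speculating
    out = compute(guess)       # speculative work; output stays buffered
    actual = execute_remote()  # remote operation completes meanwhile
    if actual == guess:
        return out             # commit: discard the checkpoint
    restore(cp)                # misprediction: roll back ...
    return compute(actual)     # ... and retry with the real result
```

When predictions are usually right (as for cache-consistency checks in a distributed file system), the common path pays no blocking cost and rollback is rare.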
43. Performance of the distributed central analysis in BaBar
- Author
-
Khan, A., Mommsen, R.K., Gradl, W., Fritsch, M., Petzold, A., Roethel, W., and Smith, D.A.
- Subjects
Distributed processing (Computers) -- Research ,Electronic data processing -- Research ,Distributed processing (Computers) ,Business ,Electronics ,Electronics and electrical industries - Abstract
The total dataset produced by the BaBar experiment at the Stanford Linear Accelerator Center (SLAC) currently comprises roughly 3 × 10^9 data events and an equal amount of simulated events, corresponding to 23 Tbytes of real data and 51 Tbytes of simulated events. Since individual analyses typically select a very small fraction of all events, it would be extremely inefficient if each analysis had to process the full dataset. A first, centrally managed analysis step is therefore a common pre-selection ('skimming') of all data according to very loose, inclusive criteria to facilitate data access for later analysis. Usually, there are common selection criteria for several analyses. However, they may change over time, e.g., when new analyses are developed. Currently, O(100) such pre-selection streams ('skims') are defined. In order to provide timely access to newly created or modified skims, it is necessary to process the complete dataset several times a year. Additionally, newly taken or simulated data has to be skimmed as it becomes available. The system currently deployed for skim production is using 1800 CPUs distributed over three production sites. It was possible to process the complete dataset within about 3.5 months. We report on the stability and the performance of the system. Index Terms--Data handling, data management, data processing, distributed computing.
- Published
- 2006
44. Optimal rejuvenation scheduling of distributed computation based on dynamic programming
- Author
-
Okamura, Hiroyuki, Iwamoto, Kazuki, and Dohi, Tadashi
- Subjects
Dynamic programming -- Usage ,Distributed processing (Computers) -- Research ,Software maintenance -- Research ,Software -- Maintenance and repair ,Software -- Research ,Distributed processing (Computers) ,Software maintenance ,Computers - Abstract
Abstract: Recently, a complementary approach to handle transient software failures, called software rejuvenation, is becoming popular as a proactive fault management technique in operational software systems. In this study, we [...]
- Published
- 2006
45. Reliability evaluation of distributed computer systems subject to imperfect coverage and dependent common-cause failures
- Author
-
Xing, Liudong and Shrestha, Akhilesh
- Subjects
Distributed processing (Computers) -- Research ,Decision tree -- Research ,Distributed processing (Computers) ,Computers - Abstract
Abstract: Imperfect coverage (IPC) occurs when a malicious component failure causes extensive damage due to inadequate fault detection, fault location or fault recovery. Common-cause failures (CCF) are multiple dependent component [...]
- Published
- 2006
46. Autonomic and dependable computing: moving towards a model-driven approach
- Author
-
Dai, Yuan-Shun, Marshall, Tom, and Guan, Xiaohong
- Subjects
Distributed processing (Computers) -- Models ,Distributed processing (Computers) -- Research ,Distributed processing (Computers) ,Computers - Abstract
Abstract: The rapidly increasing complexity of computing systems is driving the movement towards autonomic systems that are capable of managing themselves without the need for human intervention. Without autonomic technologies, [...]
- Published
- 2006
47. On the performance of wide-area thin-client computing
- Author
-
Lai, Albert M. and Nieh, Jason
- Subjects
Algorithm ,Distributed processing (Computers) ,Performance improvement ,Algorithms -- Usage ,Distributed processing (Computers) -- Research - Abstract
While many application service providers have proposed using thin-client computing to deliver computational services over the Internet, little work has been done to evaluate the effectiveness of thin-client computing in a wide-area network. To assess the potential of thin-client computing in the context of future commodity high-bandwidth Internet access, we have used a novel, noninvasive slow-motion benchmarking technique to evaluate the performance of several popular thin-client computing platforms in delivering computational services cross-country over Internet2. Our results show that using thin-client computing in a wide-area network environment can deliver acceptable performance over Internet2, even when client and server are located thousands of miles apart on opposite ends of the country. However, performance varies widely among thin-client platforms and not all platforms are suitable for this environment. While many thin-client systems are touted as being bandwidth efficient, we show that network latency is often the key factor in limiting wide-area thin-client performance. Furthermore, we show that the same techniques used to improve bandwidth efficiency often result in worse overall performance in wide-area networks. We characterize and analyze the different design choices in the various thin-client platforms and explain which of these choices should be selected for supporting wide-area computing services. Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems; C.4 [Performance of Systems]:--Measurement techniques General Terms: Performance, Measurement, Experimentation, Algorithms Additional Key Words and Phrases: Thin-client, wide-area networks, Internet2, slow-motion benchmarking
- Published
- 2006
48. An approach to feature location in distributed systems
- Author
-
Edwards, Dennis, Simmons, Sharon, and Wilde, Norman
- Subjects
Distributed processing (Computers) ,Company business management ,Distributed processing (Computers) -- Management ,Distributed processing (Computers) -- Research ,Causality (Physics) -- Methods - Published
- 2006
49. A multimodal interface to control a robot arm via the Web: a case study on remote programming
- Author
-
Marin, Raul, Sanz, Pedro J., Nebot, P., and Wirz, R.
- Subjects
Distributed processing (Computers) -- Research ,Robotics -- Research ,User interface -- Research ,Distributed processing (Computers) ,User interface ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
In this paper, we present the user interface and the system architecture of an Internet-based telelaboratory, which allows researchers and students to remotely control and program two educational online robots. In fact, the challenge has been to demonstrate that remote programming combined with an advanced multimedia user interface for remote control is very suitable, flexible, and profitable for the design of a telelaboratory. The user interface has been designed by using techniques based on augmented reality and nonimmersive virtual reality, which enhance the way operators get/put information from/to the robotic scenario. Moreover, the user interface provides the possibility of letting the operator manipulate the remote environment by using multiple ways of interaction (i.e., from the simplification of the natural language to low-level remote programming). In fact, the paper focuses on the lowest level of interaction between the operator and the robot, which is remote programming. As explained in the paper, the system architecture permits any external program (i.e., remote experiment, speech-recognition module, etc.) to have access to almost every feature of the telelaboratory (e.g., cameras, object recognition, robot control, etc.). The system validation was performed by letting 40 Ph.D. students within the 'European Robotics Research Network Summer School on Internet and Online Robots for Telemanipulation' workshop (Benicassim, Spain, 2003) program several telemanipulation experiments with the telelaboratory. Some of these experiments are shown and explained in detail. Finally, the paper focuses on the analysis of the network performance for the proposed architecture (i.e., time delay). In fact, several configurations are tested through various networking protocols (i.e., Remote Method Invocation, Transmission Control Protocol/IP, User Datagram Protocol/IP). 
Results show the real possibilities offered by these remote-programming techniques for designing experiments intended to be performed both from home and on campus. Index Terms--Distributed systems, education and training, human-robot interaction, remote programming, robotics, telelabs.
- Published
- 2005
50. Fast neural network ensemble learning via negative-correlation data correction
- Author
-
Chan, Zeke S.H. and Kasabov, Nik
- Subjects
Neural networks -- Methods ,Distributed processing (Computers) -- Research ,Neural network ,Distributed processing (Computers) ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
This letter proposes a new negative correlation (NC) learning method that is easy to implement and has two advantages: 1) it requires much less communication overhead than the standard NC method, and 2) it is applicable to ensembles of heterogeneous networks. Index Terms--Distributed computing, ensemble learning, negative correlation (NC) learning.
- Published
- 2005
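For context on the record above, the standard NC ensemble loss that the letter's method reduces communication for can be sketched directly from its usual definition. This shows the classic Liu-and-Yao-style penalty, not the letter's data-correction variant, and the function name is illustrative:

```python
import numpy as np

def nc_loss(y, preds, lam=0.5):
    """Standard negative-correlation ensemble loss: each member's MSE
    plus a penalty p_i = -(f_i - fbar)^2 that rewards disagreement
    with the ensemble mean.  preds: (members, samples) array."""
    preds = np.asarray(preds, dtype=float)
    fbar = preds.mean(axis=0)                      # ensemble mean output
    mse = ((preds - y) ** 2).mean(axis=1)          # per-member error
    penalty = -((preds - fbar) ** 2).mean(axis=1)  # decorrelation term
    return mse + lam * penalty
```

The coupling through `fbar` is what forces members to exchange outputs during training; the letter's contribution is lowering that communication cost.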