61 results
Search Results
2. Intel Software Guard Extensions Applications: A Survey.
- Author
- WILL, NEWTON C. and MAZIERO, CARLOS A.
- Subjects
- *DATA privacy, *DATA integrity, *COMPUTER systems, *TRUST, *COMPUTER software
- Abstract
Data confidentiality is a central concern in modern computer systems and services, as sensitive data from users and companies are increasingly delegated to such systems. Several hardware-based mechanisms have recently been proposed to enforce security guarantees for sensitive information. Hardware-based isolated execution environments are one class of such mechanisms, in which the operating system and other low-level components are removed from the trusted computing base. One such mechanism is Intel Software Guard Extensions (Intel SGX), which introduces the concept of an enclave to encapsulate sensitive components of applications and their data. Despite being widely applied across several computing areas, SGX has limitations and performance issues that must be addressed when developing secure solutions. This text presents a categorized literature review of ongoing research on the Intel SGX architecture, discussing its applications and providing a classification of the solutions that take advantage of SGX mechanisms. We analyze and categorize 293 papers that rely on SGX to provide integrity, confidentiality, and privacy to users and data, across different contexts and goals. We also discuss research challenges and provide future directions in the field of enclaved execution, particularly when using SGX. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Securing Interruptible Enclaved Execution on Small Microprocessors.
- Author
- BUSI, MATTEO, NOORMAN, JOB, VAN BULCK, JO, GALLETTA, LETTERIO, DEGANO, PIERPAOLO, MÜHLBERG, JAN TOBIAS, and PIESSENS, FRANK
- Subjects
- *COMPUTER systems, *PROGRAMMING languages, *MICROPROCESSORS, *LANGUAGE research
- Abstract
Computer systems often provide hardware support for isolation mechanisms such as privilege levels, virtual memory, or enclaved execution. Over the past years, several successful software-based side-channel attacks have been developed that break, or at least significantly weaken, the isolation that these mechanisms offer. Extending a processor with new architectural or micro-architectural features brings a risk of introducing new software-based side-channel attacks. This article studies the problem of extending a processor with new features without weakening the security of the isolation mechanisms that the processor offers. Our solution is heavily based on techniques from research on programming languages. More specifically, we propose to use the programming language concept of full abstraction as a general formal criterion for the security of a processor extension. We instantiate the proposed criterion to the concrete case of extending a microprocessor that supports enclaved execution with secure interruptibility. This is a very relevant instantiation, as several recent papers have shown that interruptibility of enclaves leads to a variety of software-based side-channel attacks. We propose a design for interruptible enclaves and prove that it satisfies our security criterion. We also implement the design on an open-source enclave-enabled microprocessor and evaluate the cost of our design in terms of performance and hardware size. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. A Survey on Multithreading Alternatives for Soft Error Fault Tolerance.
- Author
- Oz, Isil and Arslan, Sanem
- Subjects
- *FAULT-tolerant computing, *COMPUTER systems, *SOFT errors, *COMPUTER engineering, *MICROPROCESSORS, *ERROR rates, *REDUNDANCY in engineering
- Abstract
Smaller transistor sizes and reduced voltage levels in modern microprocessors induce higher soft error rates. This trend makes reliability a primary design constraint for computer systems. Redundant multithreading (RMT) makes use of the parallelism in modern systems by employing thread-level time redundancy for fault detection and recovery. RMT can detect faults by running identical copies of a program as separate threads in parallel execution units with identical inputs and comparing their outputs. In this article, we present a survey of RMT implementations at different architectural levels with several design considerations. We explain the implementations in seminal papers and their extensions, and discuss the design choices employed by the techniques. We review both hardware and software approaches by presenting their main characteristics, and analyze studies with different design choices regarding their strengths and weaknesses. We also present a classification to help potential users find a suitable method for their requirements and to guide researchers planning to work in this area by providing insights into future trends. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
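The fault-detection core of RMT that the abstract above describes, running identical copies with identical inputs and comparing outputs, can be sketched at the software level. This is an illustrative sketch only (the function names are hypothetical, not from the surveyed papers), using ordinary threads to stand in for redundant execution units:

```python
import threading

def detect_faults(computation, inputs, runs=2):
    """Run identical copies of a computation in parallel threads with the
    same inputs and compare their outputs; any divergence is flagged.
    This mirrors the basic fault-detection idea behind RMT."""
    results = [None] * runs

    def worker(slot):
        results[slot] = computation(*inputs)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(runs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # All copies saw identical inputs, so any mismatch signals a fault.
    fault_detected = any(r != results[0] for r in results[1:])
    return results[0], fault_detected

# A deterministic computation: both copies should agree.
value, fault = detect_faults(lambda xs: sum(x * x for x in xs), ([1, 2, 3],))
```

In real RMT designs the comparison happens in hardware or at instruction granularity, and only one copy's result is committed; here a mismatch between the copies' results would indicate a soft-error-induced divergence.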
5. Abstraction and approximation in fuzzy temporal logics and models.
- Author
- Sotudeh, Gholamreza and Movaghar, Ali
- Subjects
- *ABSTRACT thought, *APPROXIMATION theory, *FUZZY systems, *SYSTEMS design, *COMPUTER systems, *GENERALIZATION
- Abstract
Recently, suitable fuzzy temporal logics have been defined to specify temporal properties of dynamic systems during the model-checking process, yet only a few fuzzy temporal logics with capable corresponding models have been developed for use in the system design phase; moreover, even when a suitable model exists, it suffers from the lack of a capable model-checking approach. To deal with uncertainty in the model-checking paradigm, this paper introduces a fuzzy Kripke model (FzKripke) and then provides a verification approach using a novel logic called Fuzzy Computation Tree Logic* (FzCTL*). Not only is state-space explosion handled using well-known concepts like abstraction and bisimulation, but an approximation method is also devised as a novel technique to deal with this problem. The fuzzy program graph, a generalization of both the program graph and FzKripke, is also introduced in this paper to allow a higher level of abstraction in model construction. Eventually, the modeling and verification of a multi-valued flip-flop are studied in order to demonstrate the capabilities of the proposed models. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
6. Safe abstractions of data encodings in formal security protocol models.
- Author
- Pironti, Alfredo and Sisto, Riccardo
- Subjects
- *COMPUTER security, *COMPUTER network protocols, *DATA encryption, *SEQUENTIAL codes, *COMPUTER systems, *AUTOMATION, *COMPUTER software
- Abstract
When using formal methods, security protocols are usually modeled at a high level of abstraction. In particular, data encoding and decoding transformations are often abstracted away. However, if no assumptions at all are made about the behavior of such transformations, they could trivially lead to security faults, for example leaking secrets or breaking freshness by collapsing nonces into constants. To address this issue, this paper formally states sufficient conditions, checkable on sequential code, such that if an abstract protocol model is secure under a Dolev-Yao adversary, then a refined model, which takes into account a wide class of possible implementations of the encoding/decoding operations, is secure too under the same adversary model. The paper also indicates possible exploitations of this result in the context of methods based on formal model extraction from implementation code and of methods based on automated code generation from formally verified models. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
7. DBSCAN Revisited, Revisited: Why and How You Should (Still) Use DBSCAN.
- Author
- SCHUBERT, ERICH, SANDER, JÖRG, ESTER, MARTIN, KRIEGEL, HANS-PETER, and XIAOWEI XU
- Subjects
- *COMPUTER systems, *INFORMATION storage & retrieval systems, *ALGORITHMS, *COMPUTATIONAL complexity, *INDEXES, *COMPUTER network resources
- Abstract
At SIGMOD 2015, an article was presented with the title "DBSCAN Revisited: Mis-Claim, Un-Fixability, and Approximation" that won the conference's best paper award. In this technical correspondence, we want to point out some inaccuracies in the way DBSCAN was represented, and why the criticism should have been directed at the assumption about the performance of spatial index structures such as R-trees and not at an algorithm that can use such indexes. We will also discuss the relationship of DBSCAN performance and the indexability of the dataset, and discuss some heuristics for choosing appropriate DBSCAN parameters. Some indicators of bad parameters will be proposed to help guide future users of this algorithm in choosing parameters such as to obtain both meaningful results and good performance. In new experiments, we show that the new SIGMOD 2015 methods do not appear to offer practical benefits if the DBSCAN parameters are well chosen and thus they are primarily of theoretical interest. In conclusion, the original DBSCAN algorithm with effective indexes and reasonably chosen parameter values performs competitively compared to the method proposed by Gan and Tao. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
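For readers comparing DBSCAN variants, a minimal pure-Python rendering of the original algorithm may help. It is an illustrative sketch, not the authors' code, and it uses a brute-force neighborhood query where a production implementation would use the spatial index the article discusses:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each 2-D point with a cluster id, or -1 for noise.
    The range query below is brute-force O(n^2); the article's point is that
    a good spatial index (e.g. an R-tree) accelerates exactly this step."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # tentatively noise; may become a border point
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # former noise joins as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:  # j is a core point: expand the cluster
                queue.extend(j_seeds)
        cluster += 1
    return labels

labels = dbscan([(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (50, 50)],
                eps=1.5, min_pts=2)
```

Here `labels` comes out as `[0, 0, 0, 1, 1, -1]`: two clusters and one noise point. The parameters `eps` and `min_pts` are exactly the ones whose choice the article gives heuristics for.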
8. Z2SAL: a translation-based model checker for Z.
- Author
- Derrick, John, North, Siobhán, and Simons, Anthony J. H.
- Subjects
- *SOFTWARE verification, *COMPUTER simulation, *CONFIRMATION (Logic), *CHECKER variations, *COMPUTER systems
- Abstract
Despite being widely known and accepted in industry, the Z formal specification language has not so far been well supported by automated verification tools, mostly because of the challenges in handling the abstraction of the language. In this paper we discuss a novel approach to building a model-checker for Z, which involves implementing a translation from Z into SAL, the input language for the Symbolic Analysis Laboratory, a toolset which includes a number of model-checkers and a simulator. The Z2SAL translation deals with a number of important issues, including: mapping unbounded, abstract specifications into bounded, finite models amenable to a BDD-based symbolic checker; converting a non-constructive and piecemeal style of functional specification into a deterministic, automaton-based style of specification; and supporting the rich set-based vocabulary of the Z mathematical toolkit. This paper discusses progress made towards implementing as complete and faithful a translation as possible, while highlighting certain assumptions, respecting certain limitations and making use of available optimisations. The translation is illustrated throughout with examples; and a complete working example is presented, together with performance data. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
9. Deductive verification of alternating systems.
- Author
- Matteo Slanina, Henny Sipma, and Zohar Manna
- Subjects
- *COMPUTER software, *COMPUTER systems, *COMPUTER network protocols, *COMPUTER networks
- Abstract
Alternating systems are models of computer programs whose behavior is governed by the actions of multiple agents with, potentially, different goals. Examples include control systems, resource schedulers, security protocols, auctions and election mechanisms. Proving properties about such systems has emerged as an important new area of study in formal verification, with the development of logical frameworks such as the alternating temporal logic ATL*. Techniques for model checking ATL* over finite-state systems have been well studied, but many important systems are infinite-state and thus their verification requires, either explicitly or implicitly, some form of deductive reasoning. This paper presents a theoretical framework for the analysis of alternating infinite-state systems. It describes models of computation, of various degrees of generality, and alternating-time logics such as ATL* and its variations. It then develops a proof system that allows one to prove arbitrary ATL* properties over these infinite-state models. The proof system is shown to be complete relative to validities in the weakest possible assertion language. The paper then derives auxiliary proof rules and verification diagram techniques and applies them to security protocols, deriving a new formal proof of fairness of a multi-party contract signing protocol where the model of the protocol and of the properties contains both game-theoretic and infinite-state (parameterized) aspects. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
10. Interactive tool support for CSP || B consistency checking.
- Author
- Neil Evans and Helen Treharne
- Subjects
- *COMPUTER systems, *AUTOMATION, *ELECTRONIC systems, *SEMANTICS
- Abstract
CSP || B is an integration of two well known formal notations: CSP and B. It provides a method for modelling systems with both complex state (described in B machines) and control flow (described as CSP processes). Consistency checking within this approach verifies that a controller process never calls a B operation outside its precondition. Otherwise the behaviour of the operation cannot be predicted. In previous work, this check was carried out by manually decomposing the model before preprocessing the CSP processes to perform a hand-written weakest precondition proof. In this paper, a framework is described that mechanises consistency checking in a theorem prover and removes the need for preprocessing. This work is based on an existing PVS embedding of the CSP traces model, but it is extended by introducing a notion of state so that the interaction between processes and machines can be analysed. Numerous rules have been defined (and proved) which enable consistency checking and decomposition via PVS proof. These rules also formally justify the relaxation of previous constraints on CSP || B architectures, thereby widening the scope of CSP || B modelling. The PVS embedding and rules presented in this paper are not only applicable to CSP || B specifications, but to other combined approaches which use a non-blocking semantics for the state-based operations. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
11. Editorial Policy.
- Author
- Wasserman, Anthony I.
- Subjects
- *PERIODICALS, *JOURNALISM, *COMPUTER programming, *COMPUTER architecture, *COMPUTER systems, *DATA encryption, *COMPUTER music, EDITORIALS
- Abstract
The article focuses on the editorial policy of the periodical "acm computing surveys." In the late 1960s, members of the Association for Computing Machinery (ACM) perceived a need for a journal that would help practitioners stay abreast of the rapidly evolving field. The periodical began publication in March 1969 with professor Bill Dorn as editor in chief, responsible for refereeing and accepting papers. In 1973 an informal editorial panel was established, and two years later this panel was formalized as an editorial board. There have since been special issues on programming, computer architecture, database management, software reliability, parallel processors, queuing network models of computer system performance, graphic standards, encryption, microprogramming, user psychology and computer music. The periodical has established a surveyors' forum to allow readers to note errors and comment on published papers. The periodical publishes any survey or tutorial paper on any topic of interest to ACM, subject to the editorial board's reviewing standards.
- Published
- 1985
12. Associative and Parallel Processors.
- Author
- Thurber, Kenneth J. and Wald, Leon D.
- Subjects
- *COMPUTER architecture, *ARRAY processors, *COMPUTER software, *PARALLEL processing, *ILLIAC computer, *COMPUTER systems
- Abstract
This paper is a tutorial survey of the area of parallel and associative processors. The paper covers the main design tradeoffs and major architectures of SIMD (Single Instruction Stream Multiple Data Stream) systems. Summaries of ILLIAC IV, STARAN, OMEN, and PEPE, the major SIMD processors, are included. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
13. The Growth of Interest in Microprogramming: A Literature Survey.
- Author
- Wilkes, M. V.
- Subjects
- *MICROPROGRAMMING, *COMPUTER programming, *DIGITAL computer simulation, *COMPUTER software, *COMPUTER systems
- Abstract
The literature is surveyed beginning with the first paper published in 1951. At that time microprogramming was proposed primarily as a means for designing the control unit of an otherwise conventional digital computer, although the possible use of a read/write control memory was noted. The survey reveals the way in which interest has successively developed in the following aspects of the subject: stored logic, the application of microprogramming to the design of a range of computers, emulation, microprogramming in support of software, and read/write control memories. The bibliography includes 55 papers. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
14. Program equivalence by circular reasoning.
- Author
- Lucanu, Dorel and Rusu, Vlad
- Subjects
- *PROGRAMMING language semantics, *EQUIVALENCE (Linguistics), *COMPUTER programming, *COMPUTER systems, *COMPUTER software research
- Abstract
We propose a logic and a deductive system for stating and automatically proving the equivalence of programs written in languages having a rewriting-based operational semantics. The chosen equivalence is parametric in a so-called observation relation, and it says that two programs satisfying the observation relation will inevitably be, in the future, in the observation relation again. This notion of equivalence generalises several well-known equivalences and is appropriate for deterministic (or, at least, for confluent) programs. The deductive system is circular in nature and is proved sound and weakly complete; together, these results say that, when it terminates, our system correctly solves the given program-equivalence problem. We show that our approach is suitable for proving equivalence for terminating and non-terminating programs as well as for concrete and symbolic programs. The latter are programs in which some statements or expressions are symbolic variables. By proving the equivalence between symbolic programs, one proves the equivalence of (infinitely) many concrete programs obtained by replacing the variables by concrete statements or expressions. The approach is illustrated by proving program equivalence in two languages from different programming paradigms. The examples in the paper, as well as other examples, can be checked using an online tool. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
15. Proof-based verification approaches for dynamic properties: application to the information system domain.
- Author
- Mammar, Amel and Frappier, Marc
- Subjects
- *INFORMATION storage & retrieval systems, *B method (Computer science), *PROGRAMMING languages, *COMPUTER systems, *COMPUTER algorithms
- Abstract
This paper proposes a formal approach for generating necessary and sufficient proof obligations to demonstrate a set of dynamic properties using the B method. In particular, we consider reachability, non-interference and absence properties. Also, we show that these properties permit a wide range of property patterns introduced by Dwyer to be expressed. An overview of a tool supporting these approaches is also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
16. The mechanical generation of fault trees for reactive systems via retrenchment I: combinational circuits.
- Author
- Banach, Richard and Bozzano, Marco
- Subjects
- *FAULT trees (Reliability engineering), *COMPUTER systems, *COMBINATIONAL circuits, *LOGIC circuits, *COMPUTER circuits
- Abstract
The manual construction of fault trees for complex systems is an error-prone and time-consuming activity, encouraging automated techniques. In this paper we show how the retrenchment approach to formal system model evolution can be developed into a versatile structured approach for the mechanical construction of fault trees. The system structure and the structure of retrenchment concessions interact to generate fault trees with appropriately deep nesting. We show how this approach can be extended to deal with minimisation, thereby diminishing the post hoc subsumption workload and potentially rendering some infeasible cases feasible. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
17. Families of Algorithms for Reducing a Matrix to Condensed Form.
- Author
- VAN ZEE, FIELD G., VAN DE GEIJN, ROBERT A., QUINTANA-ORTÍ, GREGORIO, and ELIZONDO, G. JOSEPH
- Subjects
- *COMPUTER storage devices, *COMPUTER input-output equipment, *MATRICES (Mathematics), *COMPUTER architecture, *COMPUTER systems
- Abstract
In a recent paper it was shown how memory traffic can be diminished by reformulating the classic algorithm for reducing a matrix to bidiagonal form, a preprocess when computing the singular values of a dense matrix. The key is a reordering of the computation so that the most memory-intensive operations can be "fused." In this article, we show that other operations that reduce matrices to condensed form (reduction to upper Hessenberg form and reduction to tridiagonal form) can be similarly reorganized, yielding different sets of operations that can be fused. By developing the algorithms with a common framework and notation, we facilitate the comparing and contrasting of the different algorithms and opportunities for optimization on sequential architectures. We discuss the algorithms, develop a simple model to estimate the speedup potential from fusing, and showcase performance improvements consistent with what the model predicts. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
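The "fusing" the abstract above refers to can be illustrated with a toy example (hypothetical code, not the article's algorithms): two _GEMV-like operations that would each stream the matrix through cache are reorganized so the matrix is read only once.

```python
def fused_matvecs(A, x, w):
    """Compute y = A x and z = A^T w in a single pass over the rows of A.
    Done separately, A would be streamed through cache twice; fusing the
    two vector-matrix products halves that memory traffic, which is the
    kind of reordering applied to condensed-form reductions."""
    m, n = len(A), len(A[0])
    y = [0.0] * m
    z = [0.0] * n
    for i, row in enumerate(A):      # each row of A is touched exactly once
        yi = 0.0
        wi = w[i]
        for j, a in enumerate(row):
            yi += a * x[j]           # contributes to y = A x
            z[j] += a * wi           # contributes to z = A^T w
        y[i] = yi
    return y, z

y, z = fused_matvecs([[1, 2], [3, 4]], [1, 1], [1, 0])
```

The payoff grows with matrix size: for a matrix much larger than cache, the fused loop transfers roughly half the data from main memory compared with two separate matrix-vector products.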
18. Automated verification and refinement for physical-layer protocols.
- Author
- Brown, Geoffrey and Pike, Lee
- Subjects
- *COMPUTER network protocols, *COMPUTER systems, *TRANSMITTERS (Communication), *LINE receivers (Integrated circuits), *SIGNAL theory, *SYSTEMS design, *REAL-time control
- Abstract
This paper demonstrates how to use a satisfiability modulo theories (SMT) solver together with a bounded model checker to verify properties of real-time physical layer protocols. The method is first used to verify the Biphase Mark protocol, a protocol that has been verified numerous times previously, allowing for a comparison of results. The techniques are extended to the 8N1 protocol used in universal asynchronous receiver transmitters. We then demonstrate the use of temporal refinement to link a finite state specification of 8N1 with its real-time implementation. This refinement relationship relieves a significant disadvantage of SMT approaches: their inability to scale to large problems. Finally, capturing the impact of metastability on timing requirements is a key issue in modeling physical-layer protocols. Rather than model metastability directly, a contribution of our models is treating its effect as a constraint on non-determinism. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
19. A process algebraic framework for specification and validation of real-time systems.
- Author
- Sherif, Adnan, Cavalcanti, Ana, He Jifeng, and Sampaio, Augusto
- Subjects
- *PROGRAMMING languages, *COMPUTER systems, *GALOIS theory, *ALGEBRAIC functions, *MATHEMATICAL analysis
- Abstract
Following the trend to combine techniques to cover several facets of the development of modern systems, an integration of Z and CSP, called Circus, has been proposed as a refinement language; its relational model, based on the unifying theories of programming (UTP), justifies refinement in the context of both Z and CSP. In this paper, we introduce Circus Time, a timed extension of Circus, and present a new UTP time theory, which we use to give semantics to Circus Time and to validate some of its laws. In addition, we provide a framework for validation of timed programs based on FDR, the CSP model-checker. In this technique, a syntactic transformation strategy is used to split a timed program into two parallel components: an untimed program that uses timer events, and a collection of timers. We show that, with the timer events, it is possible to reason about time properties in the untimed language, and so, using FDR. Soundness is established using a Galois connection between the untimed UTP theory of Circus (and CSP) and our time theory. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
20. Relational concurrent refinement part II: Internal operations and outputs.
- Author
- Boiten, Eerke, Derrick, John, and Schellhorn, Gerhard
- Subjects
- *SEMANTICS, *ABSTRACT data types (Computer science), *TECHNICAL specifications, *MECHANIZATION, *COMPUTER systems
- Abstract
Two styles of description arise naturally in formal specification: state-based and behavioural. In state-based notations, a system is characterised by a collection of variables, and their values determine which actions may occur throughout a system history. Behavioural specifications describe the chronologies of actions—interactions between a system and its environment. The exact nature of such interactions is captured in a variety of semantic models with corresponding notions of refinement; refinement in state based systems is based on the semantics of sequential programs and is modelled relationally. Acknowledging that these viewpoints are complementary, substantial research has gone into combining the paradigms. The purpose of this paper is to do three things. First, we survey recent results linking the relational model of refinement to the process algebraic models. Specifically, we detail how variations in the relational framework lead to relational data refinement being in correspondence with traces–divergences, singleton failures and failures–divergences refinement in a process semantics. Second, we generalise these results by providing a general flexible scheme for incorporating the two main “erroneous” concurrent behaviours: deadlock and divergence, into relational refinement. This is shown to subsume previous characterisations. In doing this we derive relational refinement rules for specifications containing both internal operations and outputs that corresponds to failures–divergences refinement. Third, the theory has been formally specified and verified using the interactive theorem prover KIV. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
21. Model checking action system refinements.
- Author
- Smith, Graeme and Winter, Kirsten
- Subjects
- *COMPUTER simulation, *PROGRAMMING languages, *COMPUTER systems, *PREDICATE calculus, *CALCULUS
- Abstract
Action systems provide a formal approach to modelling parallel and reactive systems. They have a well established theory of refinement supported by simulation-based proof rules. This paper introduces an automatic approach for verifying action system refinements utilising standard CTL model checking. To do this, we encode each of the simulation conditions as a simulation machine, a Kripke structure on which the proof obligation can be discharged by checking that an associated CTL property holds. This procedure transforms each simulation condition into a model checking problem. Each simulation condition can then be model checked in isolation, or, if desired, together with the other simulation conditions by combining the simulation machines and the CTL properties. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
22. Cache Efficient Bidiagonalization Using BLAS 2.5 Operators.
- Author
- HOWELL, GARY W., DEMMEL, JAMES W., FULTON, CHARLES T., HAMMARLING, SVEN, and MARMOL, KAREN
- Subjects
- *COMPUTER systems, *COMPUTER input-output equipment, *COMPUTER science, *COMPUTER architecture, *COMPUTER algorithms, *MATRICES (Mathematics), *COMPUTER programming
- Abstract
On cache based computer architectures using current standard algorithms, Householder bidiagonalization requires a significant portion of the execution time for computing matrix singular values and vectors. In this paper we reorganize the sequence of operations for Householder bidiagonalization of a general m x n matrix, so that two (_GEMV) vector-matrix multiplications can be done with one pass of the unreduced trailing part of the matrix through cache. Two new BLAS operations approximately cut in half the transfer of data from main memory to cache, reducing execution times by up to 25 per cent. We give detailed algorithm descriptions and compare timings with the current LAPACK bidiagonalization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
23. Reachability analysis of fragments of mobile ambients in AC term rewriting.
- Author
- Giorgio Delzanno and Roberto Montagna
- Subjects
- *REWRITING systems (Computer science), *COMPUTER systems, *COMPUTER simulation, *CALCULUS
- Abstract
In this paper, we investigate the connection between fragments of associative-commutative Term Rewriting and fragments of Mobile Ambients, a powerful model for mobile and distributed computations. The connection can be used to transfer decidability and undecidability results for important computational properties like reachability from one formalism to the other. Furthermore, it can be viewed as a vehicle to apply tools based on rewriting for the simulation and validation of specifications given in Mobile Ambients. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
24. Cost and Precision Tradeoffs of Dynamic Data Slicing Algorithms.
- Author
- Xiangyu Zhang, Gupta, Rajiv, and Youtao Zhang
- Subjects
- *ALGORITHMS, *COMPUTER software quality control, *DYNAMIC data exchange, *COMPUTER systems, *ALGEBRA, *ELECTRONIC systems
- Abstract
Dynamic slicing algorithms are used to narrow the attention of the user or an algorithm to a relevant subset of executed program statements. Although dynamic slicing was first introduced to aid in user-level debugging, increasingly applications aimed at improving software quality, reliability, security, and performance are finding opportunities to make automated use of dynamic slicing. In this paper we present the design and evaluation of three precise dynamic data slicing algorithms called the full preprocessing (FP), no preprocessing (NP) and limited preprocessing (LP) algorithms. The algorithms differ in the relative timing of constructing the dynamic data dependence graph and its traversal for computing requested dynamic data slices. Our experiments show that the LP algorithm is a fast and practical precise data slicing algorithm. In fact we show that while precise data slices can be orders of magnitude smaller than imprecise dynamic data slices, for a small number of data slicing requests, the LP algorithm is faster than an imprecise dynamic data slicing algorithm proposed by Agrawal and Horgan. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
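As an illustration of the preprocessing idea the abstract above describes (the names below are hypothetical, not the paper's implementation), a dynamic data slice can be computed by building the dynamic data dependence graph from an execution trace and walking it backwards from the slicing criterion:

```python
def dynamic_data_slice(trace, criterion):
    """Given an execution trace of (statement_id, defined_var, used_vars)
    tuples, build the dynamic data dependence graph up front (roughly the
    'full preprocessing' flavor) and return the statements the slicing
    criterion (a trace index) transitively depends on through data flow."""
    last_def = {}        # variable -> index of the trace entry that last defined it
    deps = {}            # trace index -> indices it data-depends on
    for idx, (stmt, defined, used) in enumerate(trace):
        deps[idx] = [last_def[v] for v in used if v in last_def]
        if defined is not None:
            last_def[defined] = idx

    # Backward reachability from the criterion instance.
    worklist, in_slice = [criterion], set()
    while worklist:
        idx = worklist.pop()
        if idx in in_slice:
            continue
        in_slice.add(idx)
        worklist.extend(deps[idx])
    return sorted(trace[i][0] for i in in_slice)

# a = 1; b = 2; c = a + b; d = b  -- slicing on c pulls in a and b but not d
trace = [("s1", "a", []), ("s2", "b", []),
         ("s3", "c", ["a", "b"]), ("s4", "d", ["b"])]
```

The FP/NP/LP trade-off the paper evaluates is about *when* `deps` is materialized: eagerly for every instruction, lazily on demand, or partially, which matters because real traces contain millions of entries.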
25. A Practical and Fast Iterative Algorithm for φ-Function Computation Using DJ Graphs.
- Author
- Das, Dibyendu and Ramakrishna, U.
- Subjects
- *ALGORITHMS, *ITERATIVE methods (Mathematics), *COMPUTER programming, *COMPUTER systems, *MATHEMATICAL variables, *MATHEMATICAL programming
- Abstract
We present a new and practical method of computing φ-functions for all variables in a function in Static Single Assignment (SSA) form. The new algorithm is based on computing the Merge set of each node in the control flow graph of a function (a node here represents a basic block, and the terms are used interchangeably). The Merge set of a node n is the set of nodes N where φ-functions may need to be placed if variables are defined in n. It is not necessary for n to have a definition of a variable in it; thus, the Merge set of n is dictated by the underlying structure of the CFG. The new method presented here precomputes the Merge set of every node in the CFG using an iterative approach. Later, these Merge sets are used to carry out the actual φ-function placement. The advantages appear in examples where dense definitions of variables are present (i.e., original definitions of variables, user-defined or otherwise, in a majority of basic blocks). Our experience with SSA in the High Level Optimizer (optimization levels +O3/+O4) shows that most examples from the Spec2000 benchmark suite require a high percentage of basic blocks to have their φ points computed. Previous methods of computing the same relied on the dominance frontier (DF) concept, first introduced by Cytron et al. The method presented in this paper gives a new, effective iterative solution to the problem. Also, in cases where the control flow graph does not change, our method does not require any additional computation for new definitions introduced as part of optimizations. We present implementation details with results from Spec2000 benchmarks. Our algorithm runs faster than the existing methods used. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
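Entry 25 above contrasts iterative Merge-set computation with Cytron et al.'s dominance-frontier approach to φ-placement. As an illustrative sketch only (not the authors' algorithm), the dominance frontiers that classical φ-placement relies on can be computed like this, assuming a CFG given as predecessor lists plus precomputed immediate dominators; all names below are invented for the example:

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers (Cytron et al.'s DF sets) per node.

    preds: dict node -> list of predecessor nodes
    idom:  dict node -> immediate dominator (entry maps to itself)
    """
    df = {n: set() for n in preds}
    for n in preds:
        if len(preds[n]) >= 2:          # only join nodes contribute
            for p in preds[n]:
                runner = p
                # walk up the dominator tree until reaching n's idom;
                # every node passed has n in its dominance frontier
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# A diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
df = dominance_frontiers(preds, idom)
```

A variable defined in block `a` or `b` would then need a φ-function at `join`, the node in each block's dominance frontier.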
26. SURVEYORS' FORUM.
- Author
-
Fernandez, E. B., Won Kim, and Friesen, Oris D.
- Subjects
- *
LETTERS , *DATABASE management , *INFORMATION storage & retrieval systems , *ELECTRONIC data processing , *COMPUTER systems , *INFORMATION technology , *ELECTRONIC systems - Abstract
This article presents letters related to database management, published in the September 1979 issue of the journal "Computing Surveys." The journal received several letters pointing out omissions in the survey of relational database systems published in that issue. This article provides a table, similar to the one used in the original survey, summarizing the status of nine reported systems. One of the letters presents a table outlining nine relational database systems that escaped the survey in the September issue. In his response letter, researcher Won Kim explains how these systems escaped his survey. Another letter concerns the MULTICS Relational Data Store (MRDS) database management system, which became commercially available in 1975. The access method utilized by MRDS is indexed sequential, augmented with secondary indices. The input language is a relationally complete relational calculus, and input expressions are converted internally.
- Published
- 1980
- Full Text
- View/download PDF
27. Guest Editor's Overview . . .
- Author
-
Denning, Peter J.
- Subjects
- *
COMPUTER programming , *RULES , *COMPUTERS , *GUIDELINES , *CREATIVE ability , *TECHNICAL specifications , *COMPUTER systems - Abstract
The article presents an overview of papers presented in the December 1974 issue of the periodical "acm computing surveys." All papers selected for publication in this issue cover a range of viewpoints about good programming. There is no fixed set of rules by which clear, understandable, and provable programs can be constructed. There are guidelines, of course, but the individual programmer's style, clarity of thought, and creativity contribute significantly to the outcome. The first two papers in the issue deal with the relationship between better programming and the programming environment.
- Published
- 1974
- Full Text
- View/download PDF
28. Concurrency and Distribution in Object-Oriented Programming.
- Author
-
Briot, Jean-Pierre, Guerraoui, Rachid, and Lohr, Klaus-Peter
- Subjects
- *
PROGRAMMING languages , *OBJECT-oriented programming , *COMPUTER programming , *RELIABILITY in engineering , *COMPUTER systems , *ELECTRONIC data processing - Abstract
This paper aims at discussing and classifying the various ways in which the object paradigm is used in concurrent and distributed contexts. We distinguish among the library approach, the integrative approach, and the reflective approach. The library approach applies object-oriented concepts, as they are, to structure concurrent and distributed systems through class libraries. The integrative approach consists of merging concepts such as object and activity, message passing, and transaction. The reflective approach integrates class libraries intimately within an object-based programming language. We discuss and illustrate each of these approaches and point out their complementary levels and goals. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
29. Cache Coherence in Large-Scale Shared-Memory Multiprocessors: Issues and Comparisons.
- Author
-
Lilja, David J.
- Subjects
- *
ADAPTIVE computing systems , *CACHE memory , *COMPUTER storage devices , *MULTIPROCESSORS , *COST , *ELECTRONIC data processing , *COMPUTER systems - Abstract
Due to data spreading among processors and due to the cache coherence problem, private data caches have not been as effective in reducing the average memory delay in multiprocessors as in uniprocessors. A wide variety of mechanisms have been proposed for maintaining cache coherence in large-scale shared-memory multiprocessors, making it difficult to compare their performance and implementation implications. To help the computer architect understand some of the trade-offs involved, this paper surveys current cache coherence mechanisms and identifies several issues critical to their design. These design issues include: (1) the coherence detection strategy, through which possibly incoherent memory accesses are detected either statically at compile time or dynamically at run time; (2) the coherence enforcement strategy, such as updating or invalidating, used to ensure that stale cache entries are never referenced by a processor; (3) how the precision of block-sharing information can be changed to trade off the implementation cost and performance of the coherence mechanism; and (4) how the cache block size affects the performance of the memory system. Trace-driven simulations are used to compare the performance and implementation impacts of these different issues. Additionally, hybrid strategies are presented that can enhance the performance of the multiprocessor memory system by combining several different coherence mechanisms into a single system. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
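Entry 29 distinguishes coherence enforcement by invalidation versus updating. A minimal toy model of write-invalidate, directory-based coherence (purely illustrative, not taken from the surveyed paper; the class and method names are invented) might look like:

```python
class DirectoryCache:
    """Toy write-invalidate coherence model: a directory records which
    caches hold a copy of each block, and a write invalidates every
    other cached copy before proceeding.  Illustrative sketch only."""

    def __init__(self, n_caches):
        self.sharers = {}                        # block -> set of cache ids
        self.caches = [dict() for _ in range(n_caches)]
        self.memory = {}                         # block -> value (default 0)

    def read(self, cid, block):
        if block not in self.caches[cid]:        # miss: fetch, record sharer
            self.caches[cid][block] = self.memory.get(block, 0)
            self.sharers.setdefault(block, set()).add(cid)
        return self.caches[cid][block]

    def write(self, cid, block, value):
        # coherence enforcement: invalidate all other cached copies,
        # so no processor can reference a stale entry
        for other in self.sharers.get(block, set()) - {cid}:
            self.caches[other].pop(block, None)
        self.sharers[block] = {cid}
        self.caches[cid][block] = value
        self.memory[block] = value

sys_ = DirectoryCache(2)
sys_.read(0, "x")          # both caches come to share block "x"
sys_.read(1, "x")
sys_.write(0, "x", 7)      # cache 1's copy is invalidated
```

After the write, cache 1 misses on its next read of `"x"` and fetches the fresh value; an update-based protocol would instead push the new value into cache 1's copy.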
30. SURVEYOR'S FORUM.
- Author
-
Strigini, Lorenzo
- Subjects
- *
FAULT-tolerant computing , *PROLOG (Computer program language) , *FAULT tolerance (Engineering) , *COMPUTER systems - Abstract
Comments on the paper `Resourceful Systems for Fault Tolerance, Reliability and Safety,' by Russell J. Abbott. Response of Abbott to the comments; Inaccuracies in the introductory general considerations; Rating of the merits of Prolog as a language for designing robust computer systems; Importance of predictability of behavior when advocating the use of self-modifying systems to achieve flexibility.
- Published
- 1991
31. Distributed Operating Systems.
- Author
-
Tanenbaum, Andrew S. and van Renesse, Robbert
- Subjects
- *
DISTRIBUTED operating systems (Computers) , *COMPUTER networks , *COMPUTER operating systems , *COMPUTER systems , *SYSTEMS design , *COMPUTER software - Abstract
Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a distributed operating system and how it is distinguished from a computer network, various key design issues are discussed. Then several examples of current research projects are examined in some detail, namely, the Cambridge Distributed Computing System, Amoeba, V, and Eden. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
32. System Architectures for Computer Music.
- Author
-
Gordon, John W.
- Subjects
- *
COMPUTER music , *COMPUTER systems , *COMPUTER architecture , *HOUSEHOLD electronics , *COMPUTER network architectures , *ELECTRONIC data processing , *COMPUTERS , *HARDWARE - Abstract
Computer music is a relatively new field. While a large proportion of the public is aware of computer music in one form or another, there seems to be a need for a better understanding of its capabilities and limitations in terms of synthesis, performance, and recording hardware. This article addresses that need by surveying and discussing the architecture of existing computer music systems. System requirements vary according to what the system will be used for. Common uses for computer music systems include composition, performance, research, home entertainment, and studio recording/mixing. This paper outlines system components with this wide diversity of possible uses in mind. Current synthesis and analysis techniques, and the different ways in which these techniques can be implemented in special-purpose hardware, are comprehensively reviewed. Design specifications are given for certain digital-to-analog (and analog-to-digital) converters, disk interfaces, system organization, control hardware and software, and numerical precision. Several synthesis systems are described in detail, with an emphasis on theoretical developments and innovative design. Commercial synthesizers and other architectures are also briefly mentioned. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
33. Research in Music and Artificial Intelligence.
- Author
-
Roads, Curtis
- Subjects
- *
ARTIFICIAL intelligence , *COMPUTER music , *COMPUTER systems , *MUSIC , *RESEARCH , *ELECTRONIC data processing , *COMPUTERS , *ELECTRONIC systems - Abstract
Although the boundaries of artificial intelligence (AI) remain elusive, computers can now perform musical tasks that were formerly associated exclusively with naturally intelligent musicians. After a historical note, this paper sermonizes on the need for AI techniques in four areas of musical research: composition, performance, music theory, and digital sound processing. The next part surveys recent work involving AI and music. The discussion concentrates on applications in the four areas of research just mentioned. The final part examines how AI techniques of planning and learning could be used to expand the knowledge base and enrich the behavior of musically intelligent systems. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
34. Guest Editor's Introduction to the Special Issue on Computer Music.
- Author
-
Abbott, Curtis
- Subjects
- *
COMPUTER systems , *ELECTRONIC systems , *RESEARCH , *COMPUTER composition , *COMPUTER music , *PROGRAMMING languages , *HARDWARE , *INTERFACE circuits , *COMPUTER architecture - Abstract
The article presents an overview of the papers presented in the June 1985 issue of the periodical "acm computing surveys." The four articles included in this issue survey various research areas, such as computer architecture and programming languages. In the first article the author surveys research in music and artificial intelligence. The second and third articles concentrate on the system aspects of the field. The second article covers the hardware and signal-processing requirements that drive system design and surveys various architectural alternatives that have evolved in response to these requirements. The third article considers the software side of these systems. The fourth article considers the important problem of computer-based user interfaces for computer music. Research in computer music is rewarding for a number of reasons, among them the strong interdisciplinary flavor of the field, the technical and intellectual challenges it presents, and the need for artistic judgment and sophistication.
- Published
- 1985
35. Highly Available Systems for Database Applications.
- Author
-
Won Kim
- Subjects
- *
COMPUTER software , *COMPUTER systems , *DESIGN , *SYSTEM analysis , *APPLICATION software , *TECHNICAL specifications - Abstract
As users entrust more and more of their applications to computer systems, the need for systems that are continuously operational (24 hours per day) has become even greater. This paper presents a survey and analysis of representative architectures and techniques that have been developed for constructing highly available systems for database applications. It then proposes a design of a distributed software subsystem that can serve as a unified framework for constructing database application systems that meet various requirements for high availability. [ABSTRACT FROM AUTHOR]
- Published
- 1984
- Full Text
- View/download PDF
36. Issues in Software Maintenance.
- Author
-
Lientz, Bennet P.
- Subjects
- *
SOFTWARE maintenance , *MANAGEMENT information systems , *COMPUTER software , *MAINTENANCE costs , *COMPUTER systems - Abstract
Until a few years ago the area of software maintenance was largely ignored. Interest has increased recently for three reasons. First, the increased number of systems, combined with the increased volume of enhancement and maintenance, has restricted resources available for new development. Second, there has been a growing awareness that tools and aids that assist in developing systems may have little effect on operational systems. Third, the management of information systems has come under increasing scrutiny in terms of costs and resource utilization. In this paper some of the major issues that surfaced during several extensive operational software studies are highlighted. These studies have raised significant questions about the roles of the users in operations and maintenance, the management of maintenance, and the kinds of tools and techniques that are needed for maintenance. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
37. Data-Driven and Demand-Driven Computer Architecture.
- Author
-
Treleaven, Philip C., Brownbridge, David R., and Hopkins, Richard P.
- Subjects
- *
COMPUTER architecture , *COMPUTER systems , *COMPUTER science , *COMPUTER input-output equipment , *COMPUTER software , *PROGRAMMING languages - Abstract
Novel data-driven and demand-driven computer architectures are under development in a large number of laboratories in the United States, Japan, and Europe. These computers are not based on the traditional von Neumann organization; instead, they are attempts to identify the next generation of computer. Basically, in data-driven (e.g., data-flow) computers the availability of operands triggers the execution of the operation to be performed on them, whereas in demand-driven (e.g., reduction) computers the requirement for a result triggers the operation that will generate it. Although there are these two distinct areas of research, each laboratory has developed its own individual model of computation, stored program representation, and machine organization. Across this spectrum of designs there is, however, a significant sharing of concepts. The aim of this paper is to identify the concepts and relationships that exist both within and between the two areas of research. It does this by examining data-driven and demand-driven architecture at three levels: computation organization, (stored) program organization, and machine organization. Finally, a survey of various novel computer architectures under development is given. [ABSTRACT FROM AUTHOR]
- Published
- 1982
- Full Text
- View/download PDF
38. Network Protocols.
- Author
-
Tanenbaum, Andrew S.
- Subjects
- *
COMPUTER network protocols , *COMPUTER networks , *DATA transmission systems , *ELECTRONIC data processing , *COMPUTER systems , *COMPUTER technical support - Abstract
During the last ten years, many computer networks have been designed, implemented, and put into service in the United States, Canada, Europe, Japan, and elsewhere. From the experience obtained with these networks, certain key design principles have begun to emerge, principles that can be used to design new computer networks in a more structured way than has traditionally been the case. Chief among these principles is the notion of structuring a network as a hierarchy of layers, each one built upon the previous one. This paper is a tutorial about such network hierarchies, using the Reference Model of Open Systems Interconnection developed by the International Organization for Standardization as a guide. Numerous examples are given to illustrate the principles. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
39. A Measurement Procedure for Queueing Network Models of Computer Systems.
- Author
-
Rose, Clifford A.
- Subjects
- *
QUEUING theory , *COMPUTER systems , *COMPUTER networks , *COMPUTER algorithms , *COMPUTER monitors , *MOTHERBOARDS , *COMPUTER operating systems - Abstract
This tutorial paper describes a procedure for obtaining input parameter values and output performance measures for a popular class of queueing network models. The procedure makes use of current measurement monitors as much as possible. We survey the two basic approaches to monitoring computer systems (event trace and sampling) and the three types of monitors (hardware, software, and hybrid). Also surveyed are measurement tools for the analytical modeling of several current families of computer systems. We discuss in detail examples of model validations and performance predictions to illustrate the measurement procedures and the class of models. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
40. Approximate Methods for Analyzing Queueing Network Models of Computing Systems.
- Author
-
Chandy, K. Mani and Sauer, Charles H.
- Subjects
- *
QUEUING theory , *COMPUTER systems , *COMPUTER simulation , *MOTHERBOARDS , *MULTIPROCESSORS , *COMPUTER networks - Abstract
The two primary issues in choosing a computing system model are credibility of the model and cost of developing and solving the model. Credibility is determined by 1) the experience and biases of the persons using the model, 2) the extent to which the model represents system features, and 3) the accuracy of the solution technique. Queueing network models are widely used because they have proven effective and are inexpensive to solve. However, most queueing network models make strong assumptions to assure an exact numerical solution. When such assumptions severely affect credibility, simulation or other approaches are used, in spite of their relatively high cost. It is the contention of this paper that queueing network models with credible assumptions can be solved approximately to provide credible performance estimates at low cost. This contention is supported by examples of approximate solutions of queueing network models. Two major approaches to approximate solution, aggregation (decomposition) and diffusion, are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
41. Queueing Networks: A Critique of the State of the Art and Directions for the Future.
- Author
-
Muntz, Richard R.
- Subjects
- *
QUEUING theory , *COMPUTER systems , *COMPUTER networks , *COMPUTER simulation , *SYSTEMS theory , *MOTHERBOARDS - Abstract
Since the early 1970s, the application of queueing network models to computer system performance analysis has generated considerable interest. The period since has seen substantial advances both in the fundamental theory and in practical experience. Other papers in this special issue have surveyed the state of the art. This note attempts to provide a condensed statement of where we are and, as much as possible, to indicate areas of future research with high potential. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
42. The Operational Analysis of Queueing Network Models.
- Author
-
Denning, Peter J. and Buzen, Jeffrey P.
- Subjects
- *
QUEUING theory , *HYBRID computer simulation , *COMPUTER systems , *COMPUTER algorithms , *COMPUTER networks , *MOTHERBOARDS , *REACTION time - Abstract
Queueing network models have proved to be cost-effective tools for analyzing modern computer systems. This tutorial paper presents the basic results using the operational approach, a framework which allows the analyst to test whether each assumption is met in a given system. The early sections describe the nature of queueing network models and their applications for calculating and predicting performance quantities. The basic performance quantities, such as utilizations, mean queue lengths, and mean response times, are defined, and operational relationships among them are derived. Following this, the concept of job flow balance is introduced and used to study asymptotic throughputs and response times. The concepts of state transition balance, one-step behavior, and homogeneity are then used to relate the proportions of time that each system state is occupied to the parameters of job demand and to device characteristics. Efficient methods for computing basic performance quantities are also described. Finally, the concept of decomposition is used to simplify analyses by replacing subsystems with equivalent devices. All concepts are illustrated liberally with examples. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
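The operational laws that entry 42 derives (the Utilization Law and Little's Law among them) can be checked directly against measured quantities. A small worked example, using the standard operational identities with invented measurement numbers:

```python
# Operational-analysis identities on directly measured quantities.
# The laws are standard; the numbers are invented for illustration.

T = 60.0        # observation period (seconds)
C = 1200        # jobs completed during T
B = 48.0        # seconds the device was observed busy

X = C / T       # throughput: 20 jobs/s
U = B / T       # utilization: 0.8
S = B / C       # mean service time per completion: 0.04 s

# Utilization Law: U = X * S holds by construction on measured data
assert abs(U - X * S) < 1e-12

# Little's Law: mean number in system N = X * R
R = 0.25        # measured mean response time (seconds), invented
N = X * R       # average of 5 jobs in the system
print(U, N)     # prints: 0.8 5.0
```

This is exactly the operational flavor of the paper: each identity is a relationship among quantities any monitor can measure over a finite observation period, rather than a stochastic assumption.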
43. Cost-Benefit Analysis in Information Systems Development and Operation.
- Author
-
King, John Leslie and Schrems, Edward L.
- Subjects
- *
COST effectiveness , *COST analysis , *COMPUTER systems , *INFORMATION resources management , *ASSOCIATIONS, institutions, etc. , *COMPUTER users - Abstract
Cost-benefit analysis of computer-based information systems is a major concern of managers in public and private organizations using computers. This paper introduces and reviews basic elements of cost-benefit analysis as applied to computerized information systems, and provides discussion of the major problems to be avoided. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
44. Positive Experiences with a Multiprocessing System.
- Author
-
Srodawa, Ronald J.
- Subjects
- *
MULTIPROCESSORS , *COMPUTER operating systems , *COMPUTER systems , *COMPUTER simulation , *PORTS (Electronic computer system) - Abstract
Encouraging experiences gained from the Michigan Terminal System (MTS) are reported as a response to a recent survey of multiprocessing organizations by Enslow [7]. This paper discusses the importance of the degree of symmetry within multiport system organizations, the ease of implementation experienced with the symmetrically organized operating system nucleus for MTS, throughput statistics for MTS which are better than those generally documented in the literature, and underlying reasons that might account for the impressive statistics. Since some of these results are at variance with the conventional wisdom concerning multiprocessor systems, a strong recommendation is made for additional modeling and measurement analysis based upon these experiences. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
45. A Straightforward Model for Computer Performance Prediction.
- Author
-
Boyse, John W. and Warn, David R.
- Subjects
- *
COMPUTER simulation , *MATHEMATICAL models , *ELECTROMECHANICAL analogies , *SIMULATION methods & models , *COMPUTER systems , *ELECTRONIC systems - Abstract
Both simulation and analytic models of computer systems can be very useful for predicting the performance of proposed new systems or proposed changes to existing systems. Unfortunately, many potential users of models are reluctant to use them because of the complexity of many such models and the difficulty of relating the model to the real system. This tutorial paper leads the reader through the development and use of an easily understood analytic model. This is then placed in context with a class of similar analytic models. In spite of the simplicity of these models they have proved useful and quite accurate in predicting performance (utilization, throughput, and response) using only the most basic system data as input. These parameters can either be estimates or measurements from a running system. The model equations and assumptions are defined, and a detailed case study is presented as an example of their use. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
46. Data Communication Control Procedures.
- Author
-
Stutzman, Byron W.
- Subjects
- *
DATA transmission systems , *COMPUTER networks , *TELECOMMUNICATION , *COMPUTER systems , *TELECOMMUNICATION systems , *ELECTRONIC data processing - Abstract
This paper is a tutorial on the methods used to control the transmission of digital information on data communication links. Simple models of data communication systems are introduced and terminology for describing their functions and operation is established. Various graphical methods of representing communication control procedures are discussed and used to describe significant features of communication control procedures in detail. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
47. Linkers and Loaders.
- Author
-
Presser, Leon and White, John R.
- Subjects
- *
LOADERS (Computer programs) , *LINKERS (Computer programs) , *COMPILERS (Computer programs) , *LINKING loaders (Computer programs) , *COMPUTER software , *COMPUTER systems - Abstract
This is a tutorial paper on the linking and loading stages of the language transformation process. First, loaders are classified and discussed. Next, the linking process is treated in terms of the various times at which it may occur (i.e., binding to logical space). Finally, the linking and loading functions are explained in detail through a careful examination of their implementation in the IBM System/360. Examples are presented, and a number of possible system trade-offs are pointed out. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
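Entry 47 treats linking as binding program addresses to logical space at various times. A toy relocating loader (an illustrative sketch with an invented object format; real formats such as IBM System/360 load modules carry far richer relocation information) can show the core idea:

```python
def relocate(code, reloc_offsets, load_base):
    """Toy relocating loader: the object code assumes a load base of 0;
    the relocation list names the word offsets that hold addresses, and
    loading adds the actual base to each.  Illustrative sketch only."""
    loaded = list(code)
    for off in reloc_offsets:
        loaded[off] += load_base       # bind the embedded address
    return loaded

# Four words: [opcode, addr, opcode, addr].
# Offsets 1 and 3 hold addresses and need relocation at load time.
image = relocate([10, 0, 11, 2], [1, 3], load_base=0x4000)
```

Binding this late (at load time) is one point on the spectrum the paper examines; a linking loader additionally resolves symbolic cross-references between separately translated modules before relocation.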
48. An Unclever Time-Sharing System.
- Author
-
Foster, Caxton C.
- Subjects
- *
TIME-sharing computer systems , *COMPUTER systems , *COMPUTER multitasking , *REMOTE access networks , *COMPUTER input-output equipment , *INTERRUPTS (Computer systems) - Abstract
This paper describes the internal structure of a time-sharing system in some detail. This system is dedicated to providing remote access, and has a simple file structure. It is intended for use in a university type environment where there are many short jobs that will profit from one- or two-second turnaround. Despite its simplicity, this system can serve as a useful introduction to the problems encountered by the designers of any time-sharing system. Included are a discussion of the command language, the hardware organization toward which the design is oriented, the general internal organization, the command sequences, the CPU scheduler, handling of interrupts, the assignment of core space, execution and control of the user's program, backup storage management, and the handling of errors. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
49. The Computer in the Humanities and Fine Arts.
- Author
-
Sedelow, Sally Yeates
- Subjects
- *
COMPUTER systems , *HUMANITIES , *ART , *INFORMATION retrieval , *ARCHITECTURE - Abstract
This paper surveys the use of the computer in the humanities and fine arts, and indicates the relevance of available hardware and software for those applications areas. The first section covers (a) pattern recognition and analysis in art, architecture, music, and literature (with an emphasis upon the latter), and (b) pattern construction, or synthesis, in art, architecture, music, and language. In the second section, data representation and manipulation are described with reference to: first, transformation of the artifact into a form suitable for the computer; second, internal data storage and manipulation; and, third, output. The three types of artifacts dealt with are: aural, or auditory; visual, including both nonalphameric and alphameric forms; tactile. A discussion of data structures and of data manipulation occupies the subsection on internal data storage and manipulation, and the brief commentary on output concentrates on aural and visual modes. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
50. Von Neumann's First Computer Program.
- Author
-
Knuth, Donald E.
- Subjects
- *
COMPUTER software , *COMPUTER systems , *FILES (Records) , *COMPUTER industry - Abstract
An analysis of the two earliest sets of instruction codes planned for stored program computers, and the earliest extant program for such a computer, gives insight into the thoughts of John von Neumann, the man who designed the instruction sets and wrote the program, and shows how several important aspects of computing have evolved. The paper is based on previously unpublished documents from the files of Herman H. Goldstine. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF