16 results for "Database theory"
Search Results
2. Interval-based temporal functional dependencies: specification and verification
- Author
-
Pietro Sala and Carlo Combi
- Subjects
Allen’s relations, Temporal functional dependencies, Relation (database), Computer science, Applied Mathematics, Temporal databases, Interval-based functional dependencies, Compass structures, B-trees, Transaction time, computer.software_genre, Temporal database, Dependency theory (database theory), Valid time, Artificial Intelligence, Database theory, Data mining, Tuple, Functional dependency, computer - Abstract
In the temporal database literature, every fact stored in a database may be equipped with two temporal dimensions: the valid time, which describes the time when the fact is true in the modeled reality, and the transaction time, which describes the time when the fact is current in the database and can be retrieved. Temporal functional dependencies (TFDs) add valid time to classical functional dependencies (FDs) in order to express database integrity constraints over the flow of time. Currently, proposals dealing with TFDs adopt a point-based approach, where tuples hold at specific time points, to express integrity constraints such as "for each month, the salary of an employee depends only on his role". To the best of our knowledge, there are no proposals dealing with interval-based temporal functional dependencies (ITFDs), where the associated valid time is represented by an interval and there is a need to represent both point-based and interval-based data dependencies. In this paper, we propose ITFDs based on Allen's interval relations and discuss their expressive power with respect to other TFDs proposed in the literature: ITFDs allow us to express interval-based data dependencies, which cannot be expressed through the existing point-based TFDs. For example, ITFDs can express constraints such as "employees starting to work the same day with the same role get the same salary" or "employees with a given role working on a project cannot start to work with the same role on another project that will end before the first one". Furthermore, we propose new algorithms based on B-trees to efficiently verify the satisfaction of ITFDs in a temporal database. These algorithms guarantee that, starting from a relation satisfying a set of ITFDs, the updated relation still satisfies the given ITFDs.
- Published
- 2013
- Full Text
- View/download PDF
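The first example constraint in the abstract above ("employees starting to work the same day with the same role get the same salary") can be checked with a simple grouping pass. The schema and data below are invented for illustration, and the paper's actual verification algorithms use B-trees rather than a hash map:

```python
# Hypothetical relation Emp(name, role, salary, start_day, end_day);
# the schema and values are illustrative, not taken from the paper.
rows = [
    ("ann", "dev", 100, 1, 9),
    ("bob", "dev", 90, 1, 7),   # same start day and role as ann, lower salary
    ("eve", "dev", 120, 2, 8),
]

def start_role_salary_violations(rows):
    """Pairs of tuples that start the same day with the same role but
    disagree on salary -- one interval-based dependency from the abstract."""
    first = {}
    bad = []
    for r in rows:
        key = (r[3], r[1])            # (start_day, role)
        if key in first and first[key][2] != r[2]:
            bad.append((first[key], r))
        first.setdefault(key, r)
    return bad

print(start_role_salary_violations(rows))  # ann and bob violate the dependency
```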
3. Disjunctive databases for representing repairs
- Author
-
Jan Chomicki, Cristian Molinaro, and Jerzy Marcinkowski
- Subjects
FOS: Computer and information sciences, Class (set theory), Database, Computer science, Applied Mathematics, Existential quantification, Database schema, InformationSystems_DATABASEMANAGEMENT, Databases (cs.DB), 02 engineering and technology, Minimal models, computer.software_genre, Set (abstract data type), TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES, Computer Science - Databases, Artificial Intelligence, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Database theory, Functional dependency, computer - Abstract
This paper addresses the problem of representing the set of repairs of a possibly inconsistent database by means of a disjunctive database. Specifically, the class of denial constraints is considered. We show that, given a database and a set of denial constraints, there exists a (unique) disjunctive database, called canonical, which represents the repairs of the database w.r.t. the constraints and is contained in any other disjunctive database with the same set of minimal models. We propose an algorithm for computing the canonical disjunctive database. Finally, we study the size of the canonical disjunctive database in the presence of functional dependencies for both subset-based repairs and cardinality-based repairs.
- Published
- 2009
- Full Text
- View/download PDF
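The subset-based repairs discussed above can be enumerated naively on a toy instance. The schema, constraint, and code below are a sketch assuming an FD stated as a denial constraint; this is not the paper's algorithm for computing the canonical disjunctive database:

```python
from itertools import combinations

# Toy relation Emp(name, dept); the denial constraint forbids two tuples
# with the same name and different depts (the FD name -> dept).
db = [("ann", "cs"), ("ann", "math"), ("bob", "cs")]

def consistent(inst):
    """True iff no two tuples share a name but differ on dept."""
    return all(not (a[0] == b[0] and a[1] != b[1])
               for a in inst for b in inst)

def subset_repairs(db):
    """Maximal consistent subsets of db: the subset-based repairs."""
    cons = [frozenset(c) for k in range(len(db) + 1)
            for c in combinations(db, k) if consistent(c)]
    return [s for s in cons if not any(s < t for t in cons)]

for r in subset_repairs(db):
    print(sorted(r))
```

Here both repairs keep ("bob", "cs") and choose one of ann's tuples; in the paper's terms, a disjunctive database representing these repairs would pair the definite fact for bob with a disjunction over ann's two tuples.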
4. On the security of individual data
- Author
-
D. Miklós, Gyula O. H. Katona, and János Demetrovics
- Subjects
Set (abstract data type), Discrete mathematics, Task (computing), Theoretical computer science, Artificial Intelligence, Computer science, Applied Mathematics, Individual data, Database theory, Upper and lower bounds, Original data - Abstract
We will consider the following problem in this paper: Assume that there are $$n$$ numerical data $$\{x_1,x_2,\ldots,x_n\}$$ (like salaries of $$n$$ individuals) stored in a database, and some subsums of these numbers are made public or just available for persons not eligible to learn the original data. Our motivating question is: At most how many of these subsums may be disclosed such that none of the numbers $$x_1,x_2,\ldots,x_n$$ can be uniquely determined from these sums? These types of problems arise when certain tasks concerning a database are done by subcontractors who are not eligible to learn the elements of the database, but naturally must be given some data to fulfill their task. In database theory such examples are called statistical databases, as they are used for statistical purposes and no individual data are supposed to be obtainable using a restricted list of SUM queries. This problem was introduced by [1], first solved by Miller et al. [7] and revisited by Griggs [4, 5]. It was shown in [7] that no more than $${n\choose n/2}$$ subsums of a given set of secure data may be disclosed without disclosing at least one of the data, and this upper bound is sharp. Since computing a subsum may require a limited number of operations, it is natural to assume that the disclosed subsums of the original elements of the database contain only a limited number of elements, say at most $$k$$. The goal of the present paper is to determine the maximum number of subsums of size at most $$k$$ which can be disclosed without making it possible to calculate any of the individual data $$x_i$$. The maximum is determined exactly for the case when the number of data is much larger than the size restriction $$k$$.
- Published
- 2006
- Full Text
- View/download PDF
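The disclosure condition above has a clean linear-algebra reading: with real-valued data, $$x_i$$ is determined by the released sums exactly when the unit vector $$e_i$$ lies in the row space of their 0/1 coefficient matrix. The sketch below (my own illustration, not the paper's combinatorial argument) tests this with exact rational arithmetic:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce over the rationals and return the rank."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def discloses(subsums, n, i):
    """x_i is determined by the released subsums iff the unit vector e_i
    lies in the row space of their 0/1 coefficient matrix."""
    A = [[1 if j in s else 0 for j in range(n)] for s in subsums]
    e = [1 if j == i else 0 for j in range(n)]
    return rank(A + [e]) == rank(A)

# Releasing x0+x1 and x1+x2 reveals no single value, but adding x0+x2 does,
# since (x0+x1) + (x0+x2) - (x1+x2) = 2*x0:
print(discloses([{0, 1}, {1, 2}], 3, 0))          # False
print(discloses([{0, 1}, {1, 2}, {0, 2}], 3, 0))  # True
```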
5. [Untitled]
- Author
-
Krisztián Tichler
- Subjects
Combinatorics, Class (set theory), Matrix (mathematics), Cardinality, Artificial Intelligence, Applied Mathematics, Database theory, Tree (set theory), Element (category theory), Representation (mathematics), Row, Mathematics - Abstract
A subset K of the columns of a matrix is called a key if every two rows that agree in the columns of K also agree in all columns of the matrix. A matrix represents a Sperner system K if its system of minimal keys is exactly K. It is known that such a representation always exists. In this paper we show that the maximum, over Sperner systems consisting only of two-element sets, of the minimum number of rows needed to represent the system is $$3^{n/3+o(n)}$$. We also consider this problem for other classes of Sperner systems (e.g., for the class of trees, where each minimal key has cardinality two and the keys form a tree). The concept of keys plays an important role in database theory.
- Published
- 2004
- Full Text
- View/download PDF
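The key notion above can be brute-forced on a small matrix. The example matrix below is my own, not from the paper: its three rows represent the Sperner system of all two-element column sets, matching the abstract's theme that few rows can represent such systems:

```python
from itertools import combinations

def is_key(matrix, cols):
    """cols is a key iff no two (distinct) rows agree on all columns in cols."""
    proj = [tuple(row[c] for c in cols) for row in matrix]
    return len(set(proj)) == len(matrix)

def minimal_keys(matrix):
    """Enumerate keys by increasing size, keeping only the minimal ones."""
    n = len(matrix[0])
    keys = []
    for k in range(n + 1):
        for cols in combinations(range(n), k):
            if not any(set(m) <= set(cols) for m in keys) and is_key(matrix, cols):
                keys.append(cols)
    return keys

# Three rows whose minimal keys are exactly the 2-element column subsets:
matrix = [[0, 0, 0], [0, 1, 1], [1, 0, 1]]
print(minimal_keys(matrix))  # [(0, 1), (0, 2), (1, 2)]
```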
6. [Untitled]
- Author
-
Leopoldo Bertossi and Camilla Schwind
- Subjects
Basis (linear algebra), Database, Computer science, Relational database, View, Applied Mathematics, Database schema, InformationSystems_DATABASEMANAGEMENT, computer.software_genre, Set (abstract data type), Artificial Intelligence, Data integrity, Database theory, Closing (morphology), computer - Abstract
In this article, we characterize in terms of analytic tableaux the repairs of inconsistent relational databases, that is, databases that do not satisfy a given set of integrity constraints. For this purpose we provide closing and opening criteria for branches in tableaux that are built for database instances and their integrity constraints. We use the tableaux-based characterization as a basis for consistent query answering, that is, for retrieving from the database answers to queries that are consistent with respect to the integrity constraints.
- Published
- 2004
- Full Text
- View/download PDF
7. [Untitled]
- Author
-
Bart Kuijpers, Peter Z. Revesz, Jan Chomicki, and Sofie Haesevoets
- Subjects
Discrete mathematics, Theoretical computer science, Artificial Intelligence, Applied Mathematics, Geometric transformation, Closure (topology), Database theory, Function (mathematics), Extension (predicate logic), Data model (GIS), Object (computer science), Data modeling, Mathematics - Abstract
We present a data model for spatio-temporal databases. In this model, spatio-temporal data is represented as a finite union of objects described by means of a spatial reference object, a temporal object and a geometric transformation function that determines the change or movement of the reference object in time. We define a number of practically relevant classes of spatio-temporal objects, and give complete results concerning closure under Boolean set operators for these classes. Since only a few classes are closed under all set operators, we suggest an extension of the model, which leads to better closure properties and therefore increased practical applicability. We also discuss a normal form for this extended data model.
- Published
- 2003
- Full Text
- View/download PDF
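The three ingredients named in the abstract (reference object, temporal object, transformation function) translate directly into a data structure. The class and field names below are my own illustrative choices, not the paper's notation:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float]

@dataclass
class STObject:
    """A spatio-temporal object: a spatial reference object, a temporal
    object (here just an interval), and a transformation over time."""
    reference: List[Point]                    # e.g. polygon vertices
    interval: Tuple[float, float]             # temporal object
    transform: Callable[[Point, float], Point]

    def snapshot(self, t: float) -> List[Point]:
        """The object's spatial extent at time t (empty outside the interval)."""
        t0, t1 = self.interval
        if not t0 <= t <= t1:
            return []
        return [self.transform(p, t) for p in self.reference]

# A unit square translating one unit per time step (an affine motion):
square = STObject(
    reference=[(0, 0), (1, 0), (1, 1), (0, 1)],
    interval=(0, 10),
    transform=lambda p, t: (p[0] + t, p[1]),
)
print(square.snapshot(2))  # [(2, 0), (3, 0), (3, 1), (2, 1)]
```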
8. Reasoning on temporal class diagrams: Undecidability results
- Author
-
Artale, Alessandro
- Published
- 2006
- Full Text
- View/download PDF
9. Classes of Spatio-Temporal Objects and their Closure Properties
- Author
-
Chomicki, Jan, Haesevoets, Sofie, Kuijpers, Bart, and Revesz, Peter
- Published
- 2003
- Full Text
- View/download PDF
10. [Untitled]
- Author
-
Domenico Aquilino, F. Turini, Patrizia Asirelli, and Chiara Renso
- Subjects
Correctness, Database, Programming language, Semantics (computer science), Applied Mathematics, Complex system, computer.software_genre, Operator (computer programming), Artificial Intelligence, Order (business), Database theory, computer, Negation as failure, Mathematics - Abstract
An operation for restricting deductive databases represented as logic programs is introduced. The constraints are coded in a separate database, and the operator puts the two databases together in order to provide a restricted view of the original database. The operator is given a semantics in terms of the immediate consequence operator. Then a transformational implementation is given and its correctness is proved with respect to the abstract semantics. The approach is presented first for positive programs and then extended to take negation as failure into account.
- Published
- 1997
- Full Text
- View/download PDF
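The immediate consequence operator that gives the semantics above can be sketched for ground positive programs: apply all rules whose bodies hold, repeat until nothing new is derived. This is a generic least-fixpoint sketch over hand-grounded rules, not the paper's restriction operator:

```python
def tp_fixpoint(rules, facts):
    """Least fixpoint of the immediate consequence operator T_P for a
    positive program; rules are (head, [body atoms]) over ground atoms."""
    db = set(facts)
    while True:
        new = {h for h, body in rules if all(b in db for b in body)} - db
        if not new:
            return db
        db |= new

# path via edge, ground-instantiated by hand for a 3-node chain:
rules = [
    (("path", 1, 2), [("edge", 1, 2)]),
    (("path", 2, 3), [("edge", 2, 3)]),
    (("path", 1, 3), [("path", 1, 2), ("path", 2, 3)]),
]
print(sorted(tp_fixpoint(rules, [("edge", 1, 2), ("edge", 2, 3)])))
```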
11. [Untitled]
- Author
-
Fosca Giannotti, Sergio Greco, Domenico Saccà, and Carlo Zaniolo
- Subjects
Database, Programming language, Computer science, Functional logic programming, Applied Mathematics, Second-generation programming language, Ontology language, computer.software_genre, Datalog, Artificial Intelligence, Well-founded semantics, Database theory, Fifth-generation programming language, computer, Logic programming, computer.programming_language - Abstract
While non-determinism has long been established as a key concept in logic programming, its importance in the context of deductive databases was recognized only recently. This paper provides an overview of recent results on this topic with the aim of providing an introduction to the theory and practice of non-determinism in deductive databases. In particular we (i) recall the main results linking non-deterministic constructs in database languages to the theory of data complexity and the expressibility hierarchy of query languages; (ii) provide a reasoned introduction to effective programming with non-deterministic constructs; (iii) compare the usage of non-deterministic constructs in languages such as LDL++ to that of traditional logic programming languages; (iv) discuss the link between the semantics of logic programs with non-deterministic constructs and the stable-model semantics of logic programs with negation.
- Published
- 1997
- Full Text
- View/download PDF
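One way to read the non-deterministic constructs surveyed above is LDL++'s choice goal: pick one tuple per group so that the chosen subrelation satisfies a functional dependency. The sketch below is a deliberate simplification of that idea, not LDL++'s full stable-model-based semantics:

```python
import random

def choice(tuples, key):
    """Nondeterministically keep one tuple per key value, so the result
    satisfies the FD key -> rest (cf. LDL++'s choice((X),(Y)) goal)."""
    groups = {}
    for t in tuples:
        groups.setdefault(t[key], []).append(t)
    return {random.choice(g) for g in groups.values()}

# advisor(student, professor): each student ends up with exactly one advisor.
facts = [("sue", "kim"), ("sue", "lee"), ("tom", "kim")]
chosen = choice(facts, 0)
print(chosen)  # one of the two stable choices for sue, plus tom's tuple
```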
12. A deontic approach to database integrity
- Author
-
Karen L. Kwast
- Subjects
Artificial neural network, Programming language, Computer science, Applied Mathematics, Deontic logic, Complex system, computer.software_genre, Set (abstract data type), TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES, Operator (computer programming), Artificial Intelligence, TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS, Computer Science::Logic in Computer Science, Data integrity, Database theory, Kripke semantics, Data mining, computer, Computer Science::Databases - Abstract
The logical theory of database integrity is developed as an application of deontic logic. After a brief introduction to database theory and Kripke semantics, a deontic operator X, denoting “what should hold, non-trivially, given a set of constraints”, is defined and axiomatized. The theory is applied to updates, to dynamic constraints and to databases extended with nulls.
- Published
- 1993
- Full Text
- View/download PDF
13. Foundations of entity-relationship modeling
- Author
-
Bernhard Thalheim
- Subjects
Computer science, Programming language, View, Applied Mathematics, Semi-structured model, Database schema, InformationSystems_DATABASEMANAGEMENT, computer.software_genre, Database design, Conceptual schema, Artificial Intelligence, Entity–relationship model, Database theory, Data mining, computer, Database model - Abstract
Database design methodologies should facilitate database modeling, effectively support database processing, and transform a conceptual schema of the database to a high-performance database schema in the model of the corresponding DBMS. The Entity-Relationship Model is extended to the Higher-order Entity-Relationship Model (HERM), which can be used as a high-level, simple and comprehensive database design model for the complete database information on structure, operations, and static and dynamic semantics. The model has the expressive power of semantic models and possesses the simplicity of the entity-relationship model. The paper shows that the model has a well-founded semantics. Several semantical constraints are considered for this model.
- Published
- 1993
- Full Text
- View/download PDF
14. Characterization of desirable properties of general database decompositions
- Author
-
Hegner, Stephen J.
- Published
- 1993
- Full Text
- View/download PDF
15. Incomplete deductive databases
- Author
-
Tomasz Imielinski
- Subjects
Database, Applied Mathematics, InformationSystems_DATABASEMANAGEMENT, Extension (predicate logic), computer.software_genre, Undecidable problem, Artificial Intelligence, Complete information, Database theory, Conjunctive query, Completeness (statistics), computer, Time complexity, Boolean conjunctive query, Mathematics - Abstract
We investigate the complexity of query processing in databases which have both incompletely specified data and deductive rules. The paper is divided into two parts: in the first we consider databases in which incompletely specified data occur only in the database intension; in the second we consider databases in which incomplete information is represented only in the database extension. We prove that, in general, the query processing problem for databases with incomplete intensions is undecidable. We then characterize a number of classes of rules for which all conjunctive queries can be processed in polynomial time. For databases with incomplete extensions we prove a number of coNP-completeness results. For instance, we demonstrate that processing disjunctions which are restricted to individual columns of database predicates can, in general, be as bad as processing arbitrary disjunctions (i.e., coNP-complete). This falsifies the conjecture that such limited disjunctions could be computationally beneficial. We also show two simple examples of situations in which query processing is guaranteed to be polynomial. These situations are linked to certain assumptions about database updates.
- Published
- 1991
- Full Text
- View/download PDF
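The coNP flavor of the problem above comes from certain-answer semantics: a query is certainly true only if it holds in every possible world induced by the disjunctive facts. The naive world-enumeration sketch below (my own toy encoding, not the paper's formalism) makes that exponential blow-up explicit:

```python
from itertools import product

def certain(worlds_choices, query):
    """A Boolean query is certainly true iff it holds in every possible
    world; worlds_choices lists the alternatives, one chosen per
    disjunctive fact (naive coNP-style enumeration)."""
    return all(query(set(world)) for world in product(*worlds_choices))

# r(a, 1) is known; a second fact is r(b, 1) OR r(b, 2):
choices = [
    [("a", 1)],               # definite fact: a single alternative
    [("b", 1), ("b", 2)],     # disjunction confined to one column
]
q = lambda db: any(x == "b" for x, _ in db)   # "does b occur at all?"
print(certain(choices, q))    # True: b occurs in every world
q2 = lambda db: ("b", 1) in db
print(certain(choices, q2))   # False: fails in the world with r(b, 2)
```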
16. Erratum for: A Study of Homogeneity in Relational Databases [Annals of Mathematics and Artificial Intelligence 33(2) (2001) 379–414]
- Author
-
José María Turull Torres
- Subjects
Theoretical computer science, business.industry, Computer science, Relational database, Applied Mathematics, computer.software_genre, Database design, Relational calculus, Artificial Intelligence, Codd's theorem, Relational model, Database theory, Conjunctive query, Artificial intelligence, Domain relational calculus, business, computer, Natural language processing - Published
- 2004
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library