113 results for "Kemper, Alfons"
Search Results
2. Chapter 4: Systems: 4.5: Other Systems.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- DATA logging, DATABASE management, CLIENT/SERVER computing, DATA recovery, BACK up systems
- Published
- 2016
3. Chapter 4: Systems: 4.4: SAP HANA.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- ENTERPRISE resource planning software, QUERY (Information retrieval system), DATABASE management, OLAP technology, SQL
- Published
- 2016
4. Chapter 4: Systems: 4.2: H-Store and VoltDB.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- DATABASE management, ONLINE data processing, COMPUTER storage devices, SQL, APPLICATION software
- Published
- 2016
5. Chapter 4: Systems: 4.1: SQL Server Hekaton.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- SQL, COMPUTER storage devices, DATABASE management, REAL-time computing, DATA analytics
- Published
- 2016
6. Chapter 3: Issues and Architectural Choices: 3.5: Query Processing and Compilation.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- QUERY (Information retrieval system), DATABASE management, COMPUTER storage devices, COMPILERS (Computer programs), COMPUTER programming
- Published
- 2016
7. Chapter 3: Issues and Architectural Choices: 3.2: Indexing.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- CENTRAL processing units, CACHE memory, PARALLEL computers, DATABASE management, MULTICORE processors
- Published
- 2016
8. Chapter 3: Issues and Architectural Choices: 3.1: Data Organization and Layout.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- COMPUTER storage devices, DATABASE management, COMPUTER architecture, HARD disks, ELECTRONIC data processing
- Published
- 2016
9. Chapter 2: History and Trends: 2.1: History.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- IMS (DL/I) (Computer system), COMPUTER storage devices, DATABASE management, COMPUTER engineering
- Published
- 2016
10. Chapter 1: Introduction.
- Author
- Faerber, Franz, Kemper, Alfons, Larson, Per-Åke, Levandoski, Justin, Neumann, Thomas, and Pavlo, Andrew
- Subjects
- COMPUTER storage devices, DATABASE management, ELECTRONIC data processing, TELECOMMUNICATION, MULTICORE processors, CENTRAL processing units
- Published
- 2016
11. Deferred Maintenance of Disk-Based Random Samples.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Gemulla, Rainer, and Lehner, Wolfgang
- Abstract
Random sampling is a well-known technique for approximate processing of large datasets. We introduce a set of algorithms for incremental maintenance of large random samples on secondary storage. We show that the sample maintenance cost can be reduced by refreshing the sample in a deferred manner. We introduce a novel type of log file which follows the intuition that only a "sample" of the operations on the base data has to be considered to maintain a random sample in a statistically correct way. Additionally, we develop a deferred refresh algorithm which updates the sample using fast sequential disk access only, and which does not require any main memory. We conducted an extensive set of experiments and found that our algorithms reduce maintenance cost by several orders of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2006
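The abstract above concerns incrementally maintaining large random samples. As a rough, illustrative baseline only, the classic in-memory reservoir-sampling algorithm (Algorithm R) keeps a uniform size-k sample of a stream; the paper's actual contribution, deferring this maintenance to disk via a sampled log, is not reproduced here:

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Keep a uniform random sample of k items from a stream (Algorithm R).

    Illustrative only: the paper maintains such samples on secondary
    storage with a deferred log, which this sketch does not attempt.
    """
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)           # fill the reservoir first
        else:
            j = rng.randint(0, i)         # inclusive; item kept with prob. k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(1000), 5))
```

Every stream element ends up in the sample with equal probability k/n, which is the invariant any deferred maintenance scheme must preserve.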
12. ArHeX: An Approximate Retrieval System for Highly Heterogeneous XML Document Collections.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Sanz, Ismael, Mesiti, Marco, Guerrini, Giovanna, and Llavori, Rafael Berlanga
- Abstract
Handling the heterogeneity of structure and/or content of XML documents for the retrieval of information is a fertile field of research nowadays. Many efforts are currently devoted to identifying approximate answers to queries that require relaxation on conditions both on the structure and the content of XML documents [1,2,4,5]. Results are ranked relying on score functions that measure their quality and relevance and only the top-k returned. [ABSTRACT FROM AUTHOR]
- Published
- 2006
13. Data Mapping as Search.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Fletcher, George H.L., and Wyss, Catharine M.
- Abstract
In this paper, we describe and situate a system for data mapping in relational databases. Automating the discovery of mappings between structured data sources is a long-standing and important problem in data management. Starting from user-provided example instances of the source and target schemas, the system approaches mapping discovery as search within the transformation space of these instances, based on a set of mapping operators. Its mapping expressions incorporate not only data-metadata transformations but also simple and complex semantic transformations, resulting in significantly wider applicability than previous systems. Extensive empirical validation, both on synthetic and real-world datasets, indicates that the approach is both viable and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2006
14. An Extensible, Distributed Simulation Environment for Peer Data Management Systems.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Hose, Katja, Job, Andreas, Karnstedt, Marcel, and Sattler, Kai-Uwe
- Abstract
Peer Data Management Systems (PDMS) have recently attracted attention from the database community. One of the main challenges of this paradigm is the development and evaluation of indexing and query processing strategies for large-scale networks. So far, research groups working in this area have each built their own testing environments, which both requires substantial effort and makes it difficult to compare different strategies. In this demonstration paper, we present a simulation environment that aims to be an extensible platform for experimenting with query processing techniques in PDMS and allows for running large simulation experiments in distributed environments such as workstation clusters or even PlanetLab. In the demonstration we plan to show the evaluation of processing strategies for queries with specialized operators like top-k and skyline computation on structured data. [ABSTRACT FROM AUTHOR]
- Published
- 2006
15. STRIDER: A Versatile System for Structural Disambiguation.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Mandreoli, Federica, Martoglia, Riccardo, and Ronchetti, Enrico
- Abstract
We present STRIDER (STRucture-based Information Disambiguation ExpeRt), a versatile system for the disambiguation of structure-based information such as XML schemas, structures of XML documents, and web directories. The system performs high-quality, fully-automated disambiguation by exploiting a novel and versatile structural disambiguation approach. [ABSTRACT FROM AUTHOR]
- Published
- 2006
16. MonetDB/XQuery—Consistent and Efficient Updates on the Pre/Post Plane.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Boehm, Christian, Boncz, Peter, Flokstra, Jan, Grust, Torsten, Keulen, Maurice, Manegold, Stefan, Mullender, Sjoerd, Rittinger, Jan, and Teubner, Jens
- Abstract
Relational XQuery processors aim at leveraging mature relational DBMS query processing technology to provide scalability and efficiency. To achieve this goal, various storage schemes have been proposed to encode the tree structure of XML documents in flat relational tables. Basically, two classes can be identified: (1) encodings using fixed-length surrogates, like the preorder ranks in the pre/post encoding [5] or the equivalent pre/size/level encoding [8], and (2) encodings using variable-length surrogates, like, e.g., ORDPATH [9] or P-PBiTree [12]. Recent research [1] showed a clear advantage of the former for efficient evaluation of XPath location steps, exploiting techniques like cheap node order tests, positional lookup, and node skipping in staircase join [7]. However, once updates are involved, variable-length surrogates are often considered the better choice, mainly as a straightforward implementation of structural XML updates using fixed-length surrogates faces two performance bottlenecks: (i) high physical cost (the preorder ranks of all nodes following the update position must be modified—on average 50% of the document), and (ii) low transaction concurrency (updating the size of all ancestor nodes causes lock contention on the document root). [ABSTRACT FROM AUTHOR]
- Published
- 2006
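The MonetDB/XQuery abstract above contrasts fixed-length surrogates such as the pre/size/level encoding with variable-length ones. As a minimal sketch (not MonetDB's storage code), the following flattens an XML tree into (pre, size, level, tag) rows; a node's subtree then occupies the contiguous pre ranks (pre, pre + size], which is what makes XPath steps cheap and structural updates expensive:

```python
import xml.etree.ElementTree as ET

def pre_size_level(root):
    """Encode an XML tree as (pre, size, level, tag) rows via preorder DFS.

    'size' is the number of descendants of a node, so the descendants of a
    node n are exactly the rows with pre rank in (n.pre, n.pre + n.size].
    """
    rows = []
    def visit(node, level):
        pre = len(rows)
        rows.append(None)                # reserve slot; size known after children
        descendants = 0
        for child in node:
            descendants += 1 + visit(child, level + 1)
        rows[pre] = (pre, descendants, level, node.tag)
        return descendants
    visit(root, 0)
    return rows

doc = ET.fromstring("<a><b><c/></b><d/></a>")
for row in pre_size_level(doc):
    print(row)   # (0, 3, 0, 'a'), (1, 1, 1, 'b'), (2, 0, 2, 'c'), (3, 0, 1, 'd')
```

Inserting a node before an existing one shifts the pre ranks of everything that follows, which is exactly the update bottleneck the abstract describes.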
17. Visualization of the … for Nearest Neighbor Queries.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Achtert, Elke, and Schwald, Dominik
- Abstract
Many different index structures have been proposed for spatial databases to support efficient query processing. However, most of these index structures suffer from an exponential dependency of processing time upon the dimensionality of the data objects. Due to this fact, an alternative approach for query processing on high-dimensional data is simply to perform a sequential scan over the entire data set. This approach often yields lower I/O costs than using a multi-dimensional index. The technique demonstrated here combines these two approaches and optimizes the number and order of blocks which are processed in a single chained I/O operation. In this demonstration we present a tool which visualizes the single I/O operations performed while processing a nearest neighbor query. It assists the development and evaluation of new cost models by providing the user with significant information about the applied page-access strategy in each step of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
18. Querying Mediated Geographic Data Sources.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Essid, Mehdi, Colonna, François-Marie, Boucelma, Omar, and Betari, Abdelkader
- Abstract
With the proliferation of geographic data and resources over the Internet, there is an increasing demand for integration services that allow a transparent access to massive repositories of heterogeneous spatial data. Recent initiatives such as Google Earth are likely to encourage other companies or state agencies to publish their (satellite) data over the Internet. To fulfill this demand, we need at minimum an efficient geographic integration system. The goal of this demonstration is to show some new and enhanced features of the VirGIS geographic mediation system. [ABSTRACT FROM AUTHOR]
- Published
- 2006
19. The SIRUP Ontology Query API in Action.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Ziegler, Patrick, Sturm, Christoph, and Dittrich, Klaus R.
- Abstract
Ontology languages to represent ontologies exist in large numbers, and users who want to access or reuse ontologies can often be confronted with a language they do not know. Therefore, ontology languages are nowadays themselves a source of heterogeneity. In this demo, we present the SIRUP Ontology Query API (SOQA) [5] that has been developed for the SIRUP approach to semantic data integration [4]. SOQA is an ontology language independent Java API for query access to ontological metadata and data that can be represented in a variety of ontology languages. In addition, we demonstrate two applications that are based on SOQA: The SOQA Browser, a tool to graphically inspect all ontology information that can be accessed through SOQA, and SOQA-QL, an SQL-like query language that supports declarative queries against ontological metadata and data. [ABSTRACT FROM AUTHOR]
- Published
- 2006
20. iMONDRIAN: A Visual Tool to Annotate and Query Scientific Databases.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Geerts, Floris, Kementsietsidis, Anastasios, and Milano, Diego
- Abstract
We demonstrate iMONDRIAN, a component of the MONDRIAN annotation management system. Distinguishing features of MONDRIAN are (i) the ability to annotate sets of values (ii) the annotation-aware query algebra. On top of that, iMONDRIAN offers an intuitive visual interface to annotate and query scientific databases. In this demonstration, we consider Gene Ontology (GO), a publicly available biological database. Using this database we show (i) the creation of annotations through the visual interface (ii) the ability to visually build complex, annotation-aware, queries (iii) the basic functionality for tracking annotation provenance. Our demonstration also provides a cheat window which shows the system internals and how visual queries are translated to annotation-aware algebra queries. [ABSTRACT FROM AUTHOR]
- Published
- 2006
21. MUSCLE: Music Classification Engine with User Feedback.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Brecheisen, Stefan, Kriegel, Hans-Peter, Kunath, Peter, Pryakhin, Alexey, and Vorberger, Florian
- Abstract
Nowadays, powerful music compression tools and cheap mass storage devices have become widely available. This allows average consumers to transfer entire music collections from the distribution medium, such as CDs and DVDs, to their computer hard drive. To locate specific pieces of music, they are usually labeled with artist and title. Yet the user would benefit from a more intuitive organization based on music style to get an overview of the music collection. We have developed a novel tool called MUSCLE which fills this gap. While there exist approaches in the field of musical genre classification, none of them features a hierarchical classification in combination with interactive user feedback and a flexible multiple assignment of songs to classes. In this paper, we present MUSCLE, a tool which allows the user to organize large music collections in a genre taxonomy and to modify class assignments on the fly. [ABSTRACT FROM AUTHOR]
- Published
- 2006
22. SAT: Spatial Awareness from Textual Input.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Kalashnikov, Dmitri V., Ma, Yiming, Mehrotra, Sharad, Hariharan, Ramaswamy, Venkatasubramanian, Nalini, and Ashish, Naveen
- Abstract
Recent events (WTC attacks, Southeast Asia Tsunamis, Hurricane Katrina, London bombings) have illustrated the need for accurate and timely situational awareness tools in emergency response. Developing effective situational awareness (SA) systems has the potential to radically improve decision support in crises by improving the accuracy and reliability of the information available to the decision-makers. In an evolving crisis, raw situational information comes from a variety of sources in the form of situational reports, live radio transcripts, sensor data, video streams. Much of the data resides (or can be converted) in the form of free text, from which events of interest are extracted. Spatial or location information is one of the fundamental attributes of the events, and is useful for a variety of situational awareness (SA) tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2006
23. XQueryViz: An XQuery Visualization Tool.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Boulos, Jihad, Karam, Marcel, Koteiche, Zeina, and Ollaic, Hala
- Abstract
In this demo we present XQueryViz, an XQuery visualization tool. This graphical tool can parse one or more XML documents and/or schemas and visualizes them as trees with zooming, expansion, and contraction functionality. The tool can also parse a textual XQuery and visualizes it as a DAG within two different windows: the first for the querying part (i.e., the For-Let-Where clauses) and the second for the "Return" clause. More importantly, users can build XQuery queries with this graphical tool by pointing and clicking on the visual XML trees to build the XPath parts of an XQuery, and then build the whole XQuery using visual constructs and connectors. A textual XQuery is then generated. [ABSTRACT FROM AUTHOR]
- Published
- 2006
24. VICO: Visualizing Connected Object Orderings.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Brecheisen, Stefan, Kriegel, Hans-Peter, Schubert, Matthias, and Gruber, Michael
- Abstract
In modern databases, complex objects like multimedia data, proteins or text objects can be modeled in a variety of representations and can be decomposed into multiple instances of simpler sub-objects. The similarity of such complex objects can be measured by a variety of distance functions. Thus, it quite often occurs that we have multiple views on the same set of data objects and do not have any intuition about how the different views agree or disagree about the similarity of objects. VICO is a tool that allows a user to interactively compare these different views on the same set of data objects. Our system is based on OPTICS, a density-based hierarchical clustering algorithm which is quite insensitive to the choice of parameters. OPTICS describes a clustering as a so-called cluster order on a data set which can be considered as an image of the data distribution. The idea of VICO is to compare the position of data objects or even complete clusters in a set of data spaces by highlighting them in various OPTICS plots. Therefore, VICO allows even non-expert users to increase the intuitive understanding of feature spaces, distance functions and object decompositions. [ABSTRACT FROM AUTHOR]
- Published
- 2006
25. TQuEST: Threshold Query Execution for Large Sets of Time Series.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Aßfalg, Johannes, Kriegel, Hans-Peter, Kröger, Peer, Kunath, Peter, Pryakhin, Alexey, and Renz, Matthias
- Abstract
Effective and efficient data mining in time series databases is essential in many application domains as for instance in financial analysis, medicine, meteorology, and environmental observation. In particular, temporal dependencies between time series are of capital importance for these applications. In this paper, we present TQuEST, a powerful query processor for time series databases. TQuEST supports a novel but very useful class of queries which we call threshold queries. Threshold queries enable searches for time series whose values are above a user defined threshold at certain time intervals. Example queries are "report all ozone curves which are above their daily mean value at the same time as a given temperature curve exceeds " or "report all blood value curves from patients whose values exceed a certain threshold one hour after the new medication was taken". TQuEST is based on a novel representation of time series which allows the query processor to access only the relevant parts of the time series. This enables an efficient execution of threshold queries. In particular, queries can be readjusted with interactive response times. [ABSTRACT FROM AUTHOR]
- Published
- 2006
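The TQuEST abstract above defines threshold queries over time series. As a naive baseline (not TQuEST's interval-based time-series representation, which avoids scanning irrelevant data), the underlying primitive simply scans a series and reports the index intervals where values exceed the threshold:

```python
def threshold_intervals(series, threshold):
    """Return half-open [start, end) index intervals where the series is
    strictly above the threshold.

    Illustrative baseline only: TQuEST answers such queries without a
    full scan by storing time series as threshold-crossing intervals.
    """
    intervals, start = [], None
    for i, v in enumerate(series):
        if v > threshold and start is None:
            start = i                      # interval opens
        elif v <= threshold and start is not None:
            intervals.append((start, i))   # interval closes
            start = None
    if start is not None:                  # series ends above threshold
        intervals.append((start, len(series)))
    return intervals

print(threshold_intervals([1, 5, 6, 2, 7, 7, 1], 4))  # → [(1, 3), (4, 6)]
```

A threshold query such as the ozone example then reduces to intersecting the interval sets of two series.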
26. X-Evolution: A System for XML Schema Evolution and Document Adaptation.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Mesiti, Marco, Celle, Roberto, Sorrenti, Matteo A., and Guerrini, Giovanna
- Abstract
The structure of XML documents, expressed as XML schemas [6], can evolve as well as their content. Systems must be frequently adapted to real-world changes or updated to fix design errors, and thus data structures must change accordingly in order to address the new requirements. A consequence of schema evolution is that documents that are instances of the original schema might no longer be valid. Currently, users have to explicitly revalidate the documents and identify the parts to be updated. Moreover, once the parts that are no longer valid have been identified, they have to be explicitly updated. All these activities are time-consuming and error-prone, and automatic facilities are required. [ABSTRACT FROM AUTHOR]
- Published
- 2006
27. Synopses Reconciliation Via Calibration in the τ-Synopses System.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Matia, Yariv, Matias, Yossi, and Portman, Leon
- Abstract
The τ-Synopses system was designed to provide a run-time environment for multiple synopses. We focus on its utilization for synopses management in a single server. In this case, a critical function of the synopses management module is that of synopses reconciliation: given some limited memory space resource, determine which synopses to build and how to allocate the space among those synopses. We have developed a novel approach of synopses calibration for an efficient computation of synopses error estimation. Consequently we can now perform the synopses reconciliation in a matter of minutes, rather than hours. [ABSTRACT FROM AUTHOR]
- Published
- 2006
28. TeNDaX, a Collaborative Database-Based Real-Time Editor System.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Leone, Stefania, Hodel-Widmer, Thomas B., Boehlen, Michael, and Dittrich, Klaus R.
- Abstract
TeNDaX is a collaborative database-based real-time editor system. TeNDaX is a new approach to word processing in which documents (i.e. content and structure, tables, images etc.) are stored in a database in a semi-structured way. This supports the provision of collaborative editing and layout, undo and redo operations, business process definition and execution within documents, security, and awareness. During document creation and use, metadata is gathered automatically. This metadata can then be used for the TeNDaX dynamic folders, data lineage, visual and text mining, and search. We present TeNDaX as a word-processing 'LAN party': collaborative editing and layout; business process definition and execution; local and global undo and redo operations; all based on the use of multiple editors and different operating systems. In a second step we demonstrate how one can use the data and metadata to create dynamic folders, visualize data provenance, carry out visual and text mining, and support sophisticated search functionality. [ABSTRACT FROM AUTHOR]
- Published
- 2006
29. Hermes - A Framework for Location-Based Data Management.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Pelekis, Nikos, Theodoridis, Yannis, Vosinakis, Spyros, and Panayiotopoulos, Themis
- Abstract
The aim of this paper is to demonstrate Hermes, a robust framework capable of aiding a spatio-temporal database developer in modeling, constructing and querying a database with dynamic objects that change location, shape and size, either discretely or continuously in time. Hermes provides spatio-temporal functionality to state-of-the-art Object-Relational DBMS (ORDBMS). The prototype has been designed as an extension of STAU [6], which provides data management infrastructure for historical moving objects, so as to additionally support the demands of real time dynamic applications (e.g. Location-Based Services - LBS). The produced type system is packaged and provided as a data cartridge using the extensibility interface of Oracle10g. The offspring of the above framework extends PL/SQL with spatio-temporal semantics. The serviceableness of the resulting query language is demonstrated by realizing queries that have been proposed in [9] as a benchmarking framework for the evaluation of LBS. [ABSTRACT FROM AUTHOR]
- Published
- 2006
30. Natix Visual Interfaces.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Böhm, A., Brantner, M., Kanne, C-C., May, N., and Moerkotte, G.
- Abstract
We present the architecture of Natix V2. Among the features of this native XML Data Store are an optimizing XPath query compiler and a powerful API. In our demonstration we explain this API and present XPath evaluation in Natix using its visual explain facilities. [ABSTRACT FROM AUTHOR]
- Published
- 2006
31. Managing and Querying Versions of Multiversion Data Warehouse.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Wrembel, Robert, and Morzy, Tadeusz
- Abstract
A data warehouse (DW) is a database that integrates external data sources (EDSs) for the purpose of advanced data analysis. The methods of designing a DW usually assume that a DW has a static schema and structures of dimensions. In practice, schema and dimensions' structures often change as the result of the evolution of EDSs, changes of the real world represented in a DW, new user requirements, new versions of software being installed, and system tuning activities. Examples of various change scenarios can be found in [1,8]. [ABSTRACT FROM AUTHOR]
- Published
- 2006
32. XG: A Grid-Enabled Query Processing Engine.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Sion, Radu, Natarajan, Ramesh, Narang, Inderpal, and Phan, Thomas
- Abstract
In [12] we introduce a novel architecture for data processing, based on a functional fusion between a data and a computation layer. In this demo we show how this architecture is leveraged to offer significant speedups for data processing jobs such as data analysis and mining over large data sets. One novel contribution of our solution is its data-driven approach. The computation infrastructure is controlled from within the data layer. Grid compute job submission events are based within the query processor on the DBMS side and in effect controlled by the data processing job to be performed. This allows the early deployment of on-the-fly data aggregation techniques, minimizing the amount of data to be transferred to/from compute nodes and is in stark contrast to existing Grid solutions that interact with data layers as external (mainly) "storage" components. By integrating scheduling intelligence in the data layer itself we show that it is possible to provide a close to optimal solution to the more general grid trade-off between required data replication costs and computation speed-up benefits. We validate this in a scenario derived from a real business deployment, involving financial customer profiling using common types of data analytics. [ABSTRACT FROM AUTHOR]
- Published
- 2006
33. Another Example of a Data Warehouse System Based on Transposed Files.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Albano, Antonio, Rosa, Luca, Dumitrescu, Cristian, Goglia, Lucio, Goglia, Roberto, and Minei, Vincenzo
- Abstract
The major commercial data warehouse systems available today are based on record-oriented relational technology optimized for OLTP applications. Several authors have shown that substantial improvements in query performance for OLAP applications can be achieved by systems based on transposed-file (column-oriented) technology, since the dominant queries only require grouping and aggregation on a few columns of large amounts of data. This new assumption underlying data warehouse systems means that several aspects of data management and query processing need to be reconsidered. We present some preliminary results of an industrial research project which is being sponsored by the Italian Ministry of Education, University and Research (MIUR) to support the cooperation of universities and industries in prototyping innovative systems. The aim of the project is to implement an SQL-compliant prototype data warehouse system based on a transposed-file storage system. The paper will focus on the optimization of star queries with group-by. This work was partially supported by the MIUR, under FAR Fund DM 297/99, Project number 11384. The project partners are Advanced Systems, University of Pisa, Department of Computer Science, and University of Sannio, Research Centre on Software Technology. [ABSTRACT FROM AUTHOR]
- Published
- 2006
34. Enabling Outsourced Service Providers to Think Globally While Acting Locally.
- Author
- Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Wilkinson, Kevin, Kuno, Harumi, Govindarajan, Kannan, Yuasa, Kei, Smathers, Kevin, Nanda, Jyotirmaya, and Dayal, Umeshwar
- Abstract
Enterprises commonly outsource all or part of their IT to vendors as a way to reduce the cost of IT, to accurately estimate what they spend on IT, and to improve its effectiveness. These contracts vary in complexity from the outsourcing of a world-wide IT function to smaller, country-specific, deals. [ABSTRACT FROM AUTHOR]
- Published
- 2006
35. Managing Collections of XML Schemas in Microsoft SQL Server 2005.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Pal, Shankar, Tomic, Dragan, Berg, Brandon, and Xavier, Joe
- Abstract
Schema evolution is of two kinds: (a) those requiring instance transformation because the application is simpler to develop when it works only with one version of the schema, and (b) those in which the old data must be preserved and instance transformation must be avoided. The latter is important in practice but has received scant attention in the literature. Data conforming to multiple versions of the XML schema must be maintained, indexed, and manipulated using the same query. Microsoft's SQL Server 2005 introduces XML schema collections to address both types of schema evolution. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
36. Improving DB2 Performance Expert - A Generic Analysis Framework.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Mignet, Laurent, Basak, Jayanta, Bhide, Manish, Roy, Prasan, Roy, Sourashis, Sengar, Vibhuti S., Vatsavai, Ranga R., Reichert, Michael, Steinbach, Torsten, Ravikant, D.V.S., and Vadapalli, Soujanya
- Abstract
The complexity of software has been dramatically increasing over the years. Database management systems have not escaped this complexity. On the contrary, this problem is aggravated in database systems because they try to integrate multiple paradigms (object, relational, XML) in one box and, unlike systems specialized for OLAP or OLTP alone, are supposed to perform well in every scenario. As a result, it is very difficult to fine-tune the performance of a DBMS. Hence, there is a need for an external tool that can monitor and fine-tune the DBMS. In this extended abstract, we describe a few techniques to improve DB2 Performance Expert, which helps in monitoring DB2. Specifically, we describe a component which is capable of early performance problem detection by analyzing sensor values over a long period of time. We also showcase a trends plotter and workload characterizer which allow a DBA to have a better understanding of resource usage. A prototype of these tools has been demonstrated to a few select customers, and based on their feedback this paper outlines the various issues that still need to be addressed in the next versions of the tool. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
37. Integrating a Maximum-Entropy Cardinality Estimator into DB2 UDB.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Kutsch, Marcel, Haas, Peter J., Markl, Volker, Megiddo, Nimrod, and Tran, Tam Minh
- Abstract
When comparing alternative query execution plans (QEPs), a cost-based query optimizer in a relational database management system (RDBMS) needs to estimate the selectivity of conjunctive predicates. The optimizer immediately faces a challenging problem: how to combine available partial information about selectivities in a consistent and comprehensive manner [1]. This paper describes a prototype solution to this problem. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
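The consistency problem the abstract above alludes to can be illustrated with a toy maximum-entropy computation. The sketch below (plain Python, not DB2 code; all numbers invented) uses iterative proportional scaling over the four truth-value atoms of two predicates, given only their single-predicate selectivities as constraints:

```python
def max_entropy_joint(s1, s2, iters=100):
    """Max-entropy distribution over atoms (b1, b2), where bi says
    whether predicate i holds, constrained to the given marginals."""
    x = {(b1, b2): 0.25 for b1 in (0, 1) for b2 in (0, 1)}
    for _ in range(iters):
        for bit, target in ((0, s1), (1, s2)):
            # Scale atoms so the marginal for this predicate hits target.
            cur = sum(p for atom, p in x.items() if atom[bit] == 1)
            for atom in x:
                if atom[bit] == 1:
                    x[atom] *= target / cur
                else:
                    x[atom] *= (1 - target) / (1 - cur)
    return x

sel = max_entropy_joint(0.1, 0.5)
# With only single-predicate selectivities known, the max-entropy
# estimate of the conjunction reduces to the independence estimate:
print(round(sel[(1, 1)], 6))  # 0.05
```

When partial multivariate statistics (e.g. a known pairwise selectivity) are added as further constraints, the same scaling scheme fills in the remaining atoms consistently, which is the essence of the approach the paper integrates into the optimizer.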
38. The Design and Architecture of the τ-Synopses System.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Matias, Yossi, Portman, Leon, and Drukh, Natasha
- Abstract
Data synopses are concise representations of data sets that enable effective processing of approximate queries over those data sets. The τ-Synopses system is designed to provide a run-time environment for remote execution of multiple synopses for both relational and XML databases. The system can serve as an effective research platform for experimental evaluation and comparison of different synopses, as well as a platform for studying the effective management of multiple synopses in a federated or centralized environment. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
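The simplest data synopsis in the sense of the abstract above is a uniform sample with a scale-up estimator. This is a generic illustration of the synopsis idea, not τ-Synopses code; the data and function names are invented.

```python
import random

def build_sample_synopsis(data, k, seed=42):
    """A k-row uniform sample standing in for the full data set."""
    rng = random.Random(seed)
    return rng.sample(data, k), len(data)

def approx_count(synopsis, predicate):
    """Scale the sample count up to an estimate over the full data."""
    sample, n = synopsis
    return len([x for x in sample if predicate(x)]) * n / len(sample)

data = list(range(1000))                     # full data set (1000 rows)
syn = build_sample_synopsis(data, 100)       # 100-row synopsis
est = approx_count(syn, lambda x: x < 500)   # exact answer would be 500
```

A system like the one described hosts many such synopses (samples, histograms, wavelets) behind one execution interface, so their accuracy and cost can be compared on the same workload.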
39. BISON: Providing Business Information Analysis as a Service.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Hacıgümüş, Hakan, Rhodes, James, Spangler, Scott, and Kreulen, Jeffrey
- Abstract
We present the architecture of a Business Information Analysis provisioning system, BISON. The service provisioning system combines two prominent domains, namely structured/unstructured data analysis and service-oriented computing. We also discuss open research problems in the area. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
40. A Metric Definition, Computation, and Reporting Model for Business Operation Analysis.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Casati, Fabio, Castellanos, Malu, Dayal, Umeshwar, and Shan, Ming-Chien
- Abstract
This paper presents a platform, called Business Cockpit, that allows users to define, compute, monitor, and analyze business and IT metrics on business activities. The problem with existing approaches to metric definition and computation is that they require a very significant development and maintenance effort. The cockpit overcomes this problem by providing users with a set of abstractions used to model the problem space, as well as development and runtime environments that support these abstractions. The cockpit is based on three conceptual models: the business domain model defines the business data to be analyzed, the metric model defines the business metrics of interest for the user, and the reporting model defines how metrics should be aggregated and presented in the reports. The proposed approach provides the following key benefits: i) it allows the definition of many different reports without writing code; ii) it reduces metric computation times; iii) it enables the definition of different ways of computing a metric based on the characteristic of the object being measured; iv) all the code of the cockpit is independent of the business domain to be managed. As such, it can be applied to many scenarios. Domain independence, however, is not achieved at the expense of complexity in the configuration: to apply the cockpit to a given domain, users are simply required to provide an abstract description of the part of their data model that is useful for business operation analysis purposes. The cockpit, and the features described above, have been developed and refined over the past few years. Our research started in the context of business processes, and we have then applied the same concepts to other domains, such as inter-bank transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
41. An ECA Rule Rewriting Mechanism for Peer Data Management Systems.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Zhao, Dan, Mylopoulos, John, Kiringa, Iluju, and Kantere, Verena
- Abstract
Managing coordination among peer databases is at the core of research in peer data management systems. The Hyperion project addresses peer database coordination through Event-Condition-Action (ECA) rules. However, peer databases are intended for non-technical end users, such as a receptionist at a doctor's office or an assistant pharmacist. Such users are not expected to know a technically demanding language for expressing ECA rules that are appropriate for coordinating their respective databases. Accordingly, we propose to offer a library of "standard" rules for coordinating two or more types of peer databases. These rules are defined in terms of assumed standard schemas for the peer databases they coordinate. Once two acquainted peers select such a rule, it can be instantiated so that it can operate for their respective databases. In this paper, we propose a mechanism for rewriting given standard rules into rules expressed in terms of the schemas of the two databases that are being coordinated. The rewriting is supported by Global-As-View mappings that are supposed to pre-exist between specific schemas and standard ones. More specifically, we propose a standard rule rewriting algorithm which we have implemented and evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
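The standard-rule instantiation described in the abstract above can be sketched as name substitution driven by Global-As-View mappings. All rule text and table names below are invented for illustration; the actual Hyperion mechanism rewrites real ECA rule syntax, not strings.

```python
# A toy "standard" ECA rule for coordinating pharmacy peers, written
# against assumed standard table names.
standard_rule = {
    "event": "INSERT ON Std_Prescription",
    "condition": "Std_Prescription.drug IN (SELECT name FROM Std_Stock)",
    "action": "INSERT INTO Std_Order(drug) VALUES (:drug)",
}

# Global-As-View mappings: each standard table is defined in terms of a
# concrete table in one peer's schema.
gav = {"Std_Prescription": "rx_entries", "Std_Stock": "inventory",
       "Std_Order": "purchase_orders"}

def rewrite(rule, mapping):
    """Instantiate a standard rule for one peer by name substitution."""
    out = {}
    for part, text in rule.items():
        for std, local in mapping.items():
            text = text.replace(std, local)
        out[part] = text
    return out

local_rule = rewrite(standard_rule, gav)
```

Once two acquainted peers pick a library rule, each side applies its own GAV mappings, so the same standard rule yields a pair of concrete rules operating over the two local schemas.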
42. Querying and Updating Probabilistic Information in XML.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Abiteboul, Serge, and Senellart, Pierre
- Abstract
We present in this paper a new model for representing probabilistic information in a semi-structured (XML) database, based on the use of probabilistic event variables. This work is motivated by the need to keep track of both the confidence and the lineage of the information stored in a semi-structured warehouse. For instance, the modules of a (Hidden Web) content warehouse may derive information concerning the semantics of discovered Web services that is by nature uncertain. Our model, namely the fuzzy tree model, supports both querying (tree pattern queries with join) and updating (transactions containing an arbitrary set of insertions and deletions) over probabilistic tree data. We highlight its expressive power and discuss implementation issues. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
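One core idea of event-variable models like the one above can be sketched compactly: each XML node is guarded by probabilistic event variables, a node exists iff all events on its root-to-node path hold, and (assuming independent events) a node's confidence is the product of the probabilities of the distinct events guarding it. The tree, tags, and probabilities below are invented; this is not the paper's fuzzy-tree implementation.

```python
# Probabilities of the (assumed independent) event variables.
event_prob = {"e1": 0.9, "e2": 0.5}

# Node = (tag, guarding events, children).
tree = ("services", [], [
    ("service", ["e1"], [
        ("operation", ["e1", "e2"], []),
    ]),
])

def confidence(node, inherited=frozenset()):
    """Yield (tag, probability-of-existence) for every node."""
    tag, events, children = node
    guard = inherited | set(events)       # distinct events on the path
    p = 1.0
    for e in guard:
        p *= event_prob[e]
    yield tag, p
    for child in children:
        yield from confidence(child, guard)

print(dict(confidence(tree)))
```

Note that a repeated event (e1 guards both "service" and "operation") is counted once, which is what makes lineage tracking via named events more precise than attaching independent probabilities to every edge.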
43. A Framework for Distributed XML Data Management.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Abiteboul, Serge, Manolescu, Ioana, and Taropa, Emanuel
- Abstract
As data management applications grow more complex, they may need efficient distributed query processing, but also subscription management, data archival, etc. To enact such applications, the current solution consists of stacking several systems together. The juxtaposition of different computing models prevents reasoning on the application as a whole, and wastes important opportunities to improve performance. We present a simple extension to the AXML [7] language, allowing it to declaratively specify and deploy complex applications based solely on XML and XML queries. Our main contribution is a full algebraic model for complex distributed AXML computations. While very expressive, the model is conceptually uniform, and enables numerous powerful optimizations across a distributed complex process. This work was partially supported by the French Government ACI MDP2P and the eDos EU project. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
44. Evolving Triggers for Dynamic Environments.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Trajcevski, Goce, Scheuermann, Peter, Ghica, Oliviu, Hinze, Annika, and Voisard, Agnes
- Abstract
In this work we address the problem of managing reactive behavior in distributed environments in which data continuously changes over time, and where users may need to explicitly express how triggers should be (self-)modified. To enable this, we propose the (ECA)² (Evolving and Context-Aware Event-Condition-Action) paradigm for specifying triggers that capture the desired reactive behavior in databases which manage distributed and continuously changing data. Since both the monitored event and the condition part of a trigger may be continuous in nature, we introduce the concept of metatriggers to coordinate the detection of events and the evaluation of conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
45. Caching Complementary Space for Location-Based Services.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Lee, Ken C.K., Lee, Wang-Chien, Zheng, Baihua, and Xu, Jianliang
- Abstract
In this paper, we propose a novel client-side, multi-granularity caching scheme, called "Complementary Space Caching" (CS caching), for location-based services in mobile environments. Different from conventional data caching schemes that only cache a portion of the dataset, CS caching maintains a global view of the whole dataset. Different portions of this view are cached at varied granularities based on the probability of their being accessed by future queries. Data objects with very high access probabilities are cached at the finest granularity, i.e., as the data objects themselves. Data objects that are less likely to be accessed in the near future are abstracted and logically cached in the form of complementary regions (CRs) at a coarse granularity. CS caching naturally supports all types of location-based queries. In this paper, we explore several design and system issues of CS caching, including cache memory allocation between objects and CRs, and CR coalescence. We develop algorithms for location-based queries and a cache replacement mechanism. Through an extensive performance evaluation, we show that CS caching is superior to existing caching schemes for location-based services. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
46. SCUBA: Scalable Cluster-Based Algorithm for Evaluating Continuous Spatio-temporal Queries on Moving Objects.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Nehme, Rimma V., and Rundensteiner, Elke A.
- Abstract
In this paper, we propose SCUBA, a Scalable Cluster-Based Algorithm for evaluating a large set of continuous queries over spatio-temporal data streams. The key idea of SCUBA is to group moving objects and queries based on common spatio-temporal properties at run-time into moving clusters to optimize query execution and thus facilitate scalability. SCUBA exploits shared cluster-based execution by abstracting the evaluation of a set of spatio-temporal queries first as a spatial join between moving clusters. This cluster-based filtering prunes true negatives. Then the execution proceeds with a fine-grained within-moving-cluster join process for all pairs of moving clusters identified as potentially joinable by a positive cluster-join match. A moving cluster can serve as an approximation of the location of its members. We show how moving clusters can serve as a means for intelligent load shedding of spatio-temporal data to avoid performance degradation with minimal harm to result quality. Our experiments on real datasets demonstrate that SCUBA can achieve a substantial improvement when executing continuous queries on spatio-temporal data streams. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
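The two-phase join described in the abstract above can be sketched with clusters modeled as bounding circles. This is a simplified illustration (coordinates, radii, and the dictionary layout are invented), not SCUBA's streaming implementation.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cluster_join(obj_clusters, qry_clusters, radius):
    """Phase 1 joins cluster bounding circles to prune true negatives;
    phase 2 does the fine-grained within-cluster join for survivors."""
    results = []
    for oc in obj_clusters:
        for qc in qry_clusters:
            # Phase 1: cluster-level filter on center distance.
            if dist(oc["center"], qc["center"]) > oc["r"] + qc["r"] + radius:
                continue
            # Phase 2: member-by-member range check.
            for o in oc["members"]:
                for q in qc["members"]:
                    if dist(o, q) <= radius:
                        results.append((o, q))
    return results

obj_clusters = [
    {"center": (0, 0), "r": 1.0, "members": [(0, 0), (1, 0)]},
    {"center": (100, 100), "r": 1.0, "members": [(100, 100)]},  # pruned
]
qry_clusters = [{"center": (0.5, 0), "r": 0.5, "members": [(0.5, 0)]}]
matches = cluster_join(obj_clusters, qry_clusters, radius=1.0)
```

The distant cluster never reaches phase 2, which is where the shared-execution savings come from when many objects and queries move together.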
47. Distributed Spatial Clustering in Sensor Networks.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Meka, Anand, and Singh, Ambuj K.
- Abstract
Sensor networks monitor physical phenomena over large geographic regions. Scientists can gain valuable insight into these phenomena, if they understand the underlying data distribution. Such data characteristics can be efficiently extracted through spatial clustering, which partitions the network into a set of spatial regions with similar observations. The goal of this paper is to perform such a spatial clustering, specifically δ-clustering, where the data dissimilarity between any two nodes inside a cluster is at most δ. We present an in-network clustering algorithm ELink that generates good δ-clusterings for both synchronous and asynchronous networks in time and in O(N) message complexity, where N denotes the network size. Experimental results on both real world and synthetic data sets show that ELink's clustering quality is comparable to that of a centralized algorithm, and is superior to other alternative distributed techniques. Furthermore, ELink performs 10 times better than the centralized algorithm, and 3-4 times better than the distributed alternatives in communication costs. We also develop a distributed index structure using the generated clusters that can be used for answering range queries and path queries. The query algorithms direct the spatial search to relevant clusters, leading to performance gains of up to a factor of 5 over competing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
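The δ-clustering objective defined in the abstract above, every pair of nodes inside a cluster differs by at most δ, can be illustrated with a greedy centralized baseline. This is not the in-network ELink algorithm; node names and readings are invented.

```python
def delta_clusters(readings, delta):
    """Greedy baseline for δ-clustering: place each node in the first
    cluster where its value stays within delta of every member,
    otherwise open a new cluster."""
    clusters = []
    for node, value in readings:
        for c in clusters:
            if all(abs(value - v) <= delta for _, v in c):
                c.append((node, value))
                break
        else:
            clusters.append([(node, value)])
    return clusters

# Four sensor readings; with delta = 1.0 they split into two regions.
clusters = delta_clusters(
    [("n1", 20.0), ("n2", 20.5), ("n3", 25.0), ("n4", 24.8)], 1.0)
```

ELink's contribution is reaching a clustering of comparable quality without any central coordinator, using only local message exchange between neighboring sensors.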
48. Fast Computation of Reachability Labeling for Large Graphs.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Cheng, Jiefeng, Yu, Jeffrey Xu, Lin, Xuemin, Wang, Haixun, and Yu, Philip S.
- Abstract
The need to process graph reachability queries stems from many applications that manage complex data as graphs. The applications include transportation networks, Internet traffic analysis, Web navigation, the semantic web, chemical informatics and bio-informatics systems, and computer vision. A graph reachability query, as one of the primary tasks, is to find whether two given data objects, u and v, are related in any way in a large and complex dataset. Formally, the query is to find whether v is reachable from u in a large directed graph. In this paper, we focus on building a reachability labeling for a large directed graph, in order to process reachability queries efficiently. Such a labeling needs to be minimized in size for the efficiency of answering the queries, and needs to be computed fast for the efficiency of constructing the labeling. As such a labeling, 2-hop cover was proposed for arbitrary graphs with theoretical bounds on both the construction cost and the size of the resulting labeling. However, in practice, as reported, the construction cost of 2-hop cover is very high even on powerful machines. In this paper, we propose a novel geometry-based algorithm which computes a high-quality 2-hop cover fast. Our experimental results verify the effectiveness of our techniques over large real and synthetic graph datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
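The query side of a 2-hop cover, the labeling the abstract above sets out to build quickly, is worth making concrete: each vertex v gets a set Lout(v) of vertices it can reach and a set Lin(v) of vertices that reach it, chosen so that u reaches v iff Lout(u) and Lin(v) share a "center" vertex. The labels below are hand-built for the tiny DAG a → b → c (an assumed example with b as the only center; each vertex also carries itself in both labels):

```python
Lout = {"a": {"a", "b"}, "b": {"b"}, "c": {"c"}}
Lin  = {"a": {"a"}, "b": {"b"}, "c": {"b", "c"}}

def reachable(u, v):
    """Answer a reachability query with one set intersection."""
    return bool(Lout[u] & Lin[v])

print(reachable("a", "c"), reachable("c", "a"))  # True False
```

Answering is trivially fast; the hard part, and the paper's subject, is computing small Lout/Lin sets for graphs with millions of vertices, which is where the proposed geometry-based construction comes in.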
49. Finding Equivalent Rewritings in the Presence of Arithmetic Comparisons.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Afrati, Foto, Chirkova, Rada, Gergatsoulis, Manolis, and Pavlaki, Vassia
- Abstract
The problem of rewriting queries using views has received significant attention because of its applications in a wide variety of data-management problems. For select-project-join SQL (a.k.a. conjunctive) queries and views, there are efficient algorithms in the literature which find equivalent and maximally contained rewritings. In the presence of arithmetic comparisons (ACs) the problem becomes more complex. We do not know how to find maximally contained rewritings in the general case; there are algorithms which find them only for special cases, such as when ACs are restricted to be semi-interval. However, we know that the problem of finding an equivalent rewriting (if one exists) in the presence of ACs is decidable, yet still doubly exponential. This complexity calls for an efficient algorithm which will perform better on average than the complete enumeration algorithm. In this work we present such an algorithm, which is sound and complete. Its efficiency lies in considering fewer candidate rewritings, because it includes a preliminary test that decides for each view whether it is potentially useful in some rewriting. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
50. Multi-query SQL Progress Indicators.
- Author
-
Ioannidis, Yannis, Scholl, Marc H., Schmidt, Joachim W., Matthes, Florian, Hatzopoulos, Mike, Boehm, Klemens, Kemper, Alfons, Grust, Torsten, Boehm, Christian, Luo, Gang, Naughton, Jeffrey F., and Yu, Philip S.
- Abstract
Recently, progress indicators have been proposed for SQL queries in RDBMSs. All previously proposed progress indicators consider each query in isolation, ignoring the impact simultaneously running queries have on each other's performance. In this paper, we explore a multi-query progress indicator, which explicitly considers concurrently running queries and even queries predicted to arrive in the future when producing its estimates. We demonstrate that multi-query progress indicators can provide more accurate estimates than single-query progress indicators. Moreover, we extend the use of progress indicators beyond being a GUI tool and show how to apply multi-query progress indicators to workload management. We report on an initial implementation of a multi-query progress indicator in PostgreSQL and experiments with its use both for estimating remaining query execution time and for workload management. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
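The difference between single-query and multi-query estimation described in the abstract above can be shown with a deliberately naive model (illustrative only, not the paper's estimator; all numbers are invented): divide the remaining work by the per-query share of a fixed processing rate, counting both currently running and predicted future queries.

```python
def remaining_time(done, total, speed, concurrent_now, concurrent_future=0):
    """Naive progress estimate: work left over this query's share of a
    fixed total processing rate, split among concurrent queries."""
    share = speed / (concurrent_now + concurrent_future)
    return (total - done) / share

# A single-query indicator assumes the whole rate belongs to this query:
alone = remaining_time(done=400, total=1000, speed=100, concurrent_now=1)
# A multi-query indicator sees two running peers plus one predicted
# arrival, so the same remaining work takes four times as long:
shared = remaining_time(400, 1000, 100, concurrent_now=3,
                        concurrent_future=1)
```

Even this toy model shows why isolation-based estimates are systematically optimistic on loaded systems, and why a workload manager can use multi-query estimates to decide which queries to admit or delay.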