28 results for "Databases as Topic classification"
Search Results
2. Administrative versus clinical databases.
- Author
-
Subramanian MP, Hu Y, Puri V, and Kozower BD
- Subjects
- Biomedical Research methods, Data Accuracy, Humans, Information Storage and Retrieval methods, Information Storage and Retrieval statistics & numerical data, Observer Variation, Registries statistics & numerical data, Databases as Topic classification, Databases as Topic statistics & numerical data, Management Information Systems statistics & numerical data, Medical Informatics methods, Outcome Assessment, Health Care methods, Quality Improvement organization & administration, Thoracic Surgery methods, Thoracic Surgery standards, Thoracic Surgery statistics & numerical data
- Published
- 2021
- Full Text
- View/download PDF
3. Ontology-guided feature engineering for clinical text classification.
- Author
-
Garla VN and Brandt C
- Subjects
- Cardiovascular Diseases, Data Mining, Databases as Topic classification, Humans, Medical Informatics Applications, Models, Theoretical, Obesity, Semantics, Unified Medical Language System, Algorithms, Natural Language Processing
- Abstract
In this study we present novel feature engineering techniques that leverage the biomedical domain knowledge encoded in the Unified Medical Language System (UMLS) to improve machine-learning based clinical text classification. Critical steps in clinical text classification include identification of features and passages relevant to the classification task, and representation of clinical text to enable discrimination between documents of different classes. We developed novel information-theoretic techniques that utilize the taxonomical structure of the Unified Medical Language System (UMLS) to improve feature ranking, and we developed a semantic similarity measure that projects clinical text into a feature space that improves classification. We evaluated these methods on the 2008 Integrating Informatics with Biology and the Bedside (I2B2) obesity challenge. The methods we developed improve upon the results of this challenge's top machine-learning based system, and may improve the performance of other machine-learning based clinical text classification systems. We have released all tools developed as part of this study as open source, available at http://code.google.com/p/ytex.
- Published
- 2012
- Full Text
- View/download PDF
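The abstract above describes information-theoretic feature ranking for text classification. The paper's actual techniques exploit the UMLS taxonomy; as a much simpler illustration of the underlying idea, the sketch below ranks binary term-presence features by plain information gain (label-entropy reduction). The feature names and toy labels are hypothetical, not taken from the study.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on a feature's values."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Hypothetical documents: one feature separates the classes, one is noise.
labels = ["obese", "obese", "lean", "lean"]
features = {"f_relevant": [1, 1, 0, 0],   # perfectly separates the classes
            "f_noise":    [1, 0, 1, 0]}   # carries no information
ranking = sorted(features,
                 key=lambda name: information_gain(features[name], labels),
                 reverse=True)
```

With the toy data, `f_relevant` has gain 1.0 and `f_noise` gain 0.0, so the relevant feature ranks first.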
4. On the parameter optimization of Support Vector Machines for binary classification.
- Author
-
Gaspar P, Carbonell J, and Oliveira JL
- Subjects
- Adult, Humans, Databases as Topic classification, Support Vector Machine
- Abstract
Classifying biological data is a common task in the biomedical context. Predicting the class of new, unknown information allows researchers to gain insight and make decisions based on the available data. Using classification methods often implies choosing the best parameters to obtain optimal class separation, and the number of parameters may be large in biological datasets. Support Vector Machines provide a well-established and powerful classification method for analysing data and finding the minimal-risk separation between different classes. Finding that separation strongly depends on the available feature set and on the tuning of hyper-parameters. Techniques for feature selection and SVM parameter optimization are known to improve classification accuracy, and the literature on them is extensive. In this paper we review the strategies used to improve the classification performance of SVMs and perform our own experiments to study the influence of features and hyper-parameters on the optimization process, using several well-known kernels.
- Published
- 2012
- Full Text
- View/download PDF
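The hyper-parameter optimization the abstract above reviews is most often an exhaustive grid search over candidate values (e.g. the SVM's C and gamma), scoring each combination by cross-validated accuracy. A minimal, library-free sketch of that loop follows; `mock_cv_accuracy` is a hypothetical stand-in for a real cross-validation score, assumed only for illustration.

```python
from itertools import product

def grid_search(score_fn, param_grid):
    """Evaluate every combination in param_grid and return the best one.

    score_fn:   callable taking keyword parameters, returning a score to maximize
    param_grid: dict mapping parameter name -> list of candidate values
    """
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical stand-in for cross-validated SVM accuracy; peaks at C=1, gamma=0.1.
def mock_cv_accuracy(C, gamma):
    return 1.0 - abs(C - 1.0) * 0.01 - abs(gamma - 0.1)

best, score = grid_search(mock_cv_accuracy,
                          {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]})
```

In practice the scoring function would train and cross-validate an SVM for each parameter pair, which is exactly why the optimization process the paper studies is computationally expensive.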
5. Use of administrative data to identify colorectal liver metastasis.
- Author
-
Anaya DA, Becker NS, Richardson P, and Abraham NS
- Subjects
- Adult, Aged, Aged, 80 and over, Algorithms, Cohort Studies, Female, Humans, Logistic Models, Male, Middle Aged, Predictive Value of Tests, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity, United States, United States Department of Veterans Affairs, Colorectal Neoplasms pathology, Current Procedural Terminology, Databases as Topic classification, Health Services Administration classification, International Classification of Diseases, Liver Neoplasms diagnosis, Liver Neoplasms secondary
- Abstract
Background: The ability to identify patients with colorectal cancer (CRC) liver metastasis (LM) using administrative data is unknown. The goals of this study were to evaluate whether administrative data can accurately identify patients with CRCLM and to develop a diagnostic algorithm capable of identifying such patients. Materials and Methods: A retrospective cohort study was conducted to validate the diagnostic and procedural codes found in administrative databases of the Veterans Administration (VA) system. CRC patients evaluated at a major VA center were identified (1997-2008, n = 1671) and classified as having liver-specific ICD-9 and/or CPT codes. The presence of CRCLM was verified by primary chart abstraction in the study sample. Contingency tables were created and the positive predictive value (PPV) for CRCLM was calculated for each candidate administrative code. A multivariate logistic-regression model was used to identify independent predictors (codes) of CRCLM, which were used to develop a diagnostic algorithm. Validity of the algorithm was determined by discrimination (c-statistic) of the model and PPV of the algorithm. Results: Multivariate logistic regression identified ICD-9 diagnosis codes 155.2 (OR 9.7 [95% CI 2.5-38.4]) and 197.7 (84.6 [52.9-135.3]), and procedure code 50.22 (5.9 [1.3-25.5]) as independent predictors of CRCLM diagnosis. The model's discrimination was 0.89. The diagnostic algorithm, defined as the presence of any of these codes, had a PPV of 87%. Conclusions: VA administrative databases reliably identify patients with CRCLM. This diagnostic algorithm is highly predictive of CRCLM diagnosis and can be used for research studies evaluating population-level features of this disease within the VA system.
- Published
- 2012
- Full Text
- View/download PDF
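The diagnostic algorithm reported in the abstract above is simply "the presence of any of these codes" (ICD-9 diagnoses 155.2 and 197.7, procedure 50.22). A sketch of how such a rule might be applied to a patient's code list is below; the function name and sample codes are illustrative, not from the study.

```python
# Codes reported in the abstract as independent predictors of CRCLM.
CRCLM_CODES = {"155.2", "197.7", "50.22"}

def flags_crclm(patient_codes):
    """Apply the reported rule: flag the patient if any qualifying
    diagnosis or procedure code is present."""
    return bool(CRCLM_CODES & set(patient_codes))

flags_crclm(["197.7", "401.9"])  # metastasis code present -> flagged
flags_crclm(["401.9"])           # no qualifying code -> not flagged
```

The simplicity of the rule is the point: a one-line set intersection over administrative codes achieved the 87% PPV reported, which is what makes such algorithms attractive for population-level research.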
6. Databases in veterinary medicine - validation, harmonisation and application: introduction.
- Author
-
Houe H, Egenvall A, Virtala AM, Olafsson T, and Østerås O
- Subjects
- Animals, Animal Diseases classification, Animal Diseases diagnosis, Animal Diseases epidemiology, Animal Diseases etiology, Databases as Topic classification, Records veterinary, Veterinary Medicine
- Published
- 2011
- Full Text
- View/download PDF
7. Organizing research data.
- Author
-
Sestoft P
- Subjects
- Animal Husbandry statistics & numerical data, Animals, Cattle, Lactation, Milk metabolism, Milk statistics & numerical data, Veterinary Medicine statistics & numerical data, Animal Husbandry methods, Databases as Topic classification, Databases as Topic statistics & numerical data, Records veterinary, Veterinary Medicine methods
- Abstract
Research relies on ever larger amounts of data from experiments, automated production equipment, questionnaires, time series such as weather records, and so on. A major task in science is to combine, process and analyse such data to obtain evidence of patterns and correlations. Most research data are in digital form, which in principle ensures easy processing and analysis, easy long-term preservation, and easy reuse in future research, perhaps in entirely unanticipated ways. However, in practice, obstacles such as incompatible or undocumented data formats, poor data quality and lack of familiarity with current technology prevent researchers from making full use of available data. This paper argues that relational databases are excellent tools for veterinary research and animal production; provides a small example to introduce basic database concepts; and points out some concerns that must be addressed when organizing data for research purposes.
- Published
- 2011
- Full Text
- View/download PDF
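The abstract above argues for relational databases as research tools and mentions a small introductory example. In the same spirit, here is a hedged, self-contained sketch using Python's built-in sqlite3 module; the cow/milk-yield schema is hypothetical, chosen only to echo the entry's animal-production theme, and is not the paper's own example.

```python
import sqlite3

# In-memory database with two related tables: animals and their daily yields.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cow  (cow_id INTEGER PRIMARY KEY, herd TEXT NOT NULL);
    CREATE TABLE milk (cow_id   INTEGER REFERENCES cow(cow_id),
                       day      TEXT NOT NULL,
                       yield_kg REAL NOT NULL);
""")
conn.executemany("INSERT INTO cow VALUES (?, ?)",
                 [(1, "A"), (2, "A"), (3, "B")])
conn.executemany("INSERT INTO milk VALUES (?, ?, ?)",
                 [(1, "2011-01-01", 28.0), (1, "2011-01-02", 30.0),
                  (2, "2011-01-01", 25.0), (3, "2011-01-01", 31.0)])

# A join plus aggregation: mean daily yield per herd.
rows = conn.execute("""
    SELECT cow.herd, AVG(milk.yield_kg)
    FROM milk JOIN cow ON milk.cow_id = cow.cow_id
    GROUP BY cow.herd ORDER BY cow.herd
""").fetchall()
```

The separation of entities (cows) from measurements (milk records) is exactly the kind of normalized organization the paper advocates: new measurement types become new tables, not new spreadsheet columns.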
8. Automatic epileptic seizure detection in EEGs based on line length feature and artificial neural networks.
- Author
-
Guo L, Rivero D, Dorado J, Rabuñal JR, and Pazos A
- Subjects
- Algorithms, Databases as Topic classification, Databases as Topic standards, Electroencephalography classification, Epilepsy classification, Evoked Potentials physiology, Fourier Analysis, Humans, Pattern Recognition, Automated classification, Predictive Value of Tests, Software classification, Software standards, Time Factors, Artificial Intelligence, Electroencephalography methods, Epilepsy diagnosis, Epilepsy physiopathology, Neural Networks, Computer, Pattern Recognition, Automated methods, Signal Processing, Computer-Assisted
- Abstract
About 1% of the people in the world suffer from epilepsy, whose main characteristic is recurrent seizures. Careful analysis of electroencephalogram (EEG) recordings can provide valuable information for understanding the mechanisms behind epileptic disorders. Since epileptic seizures occur irregularly and unpredictably, automatic seizure detection in EEG recordings is highly desirable. The wavelet transform (WT) is an effective analysis tool for non-stationary signals such as EEGs. The line length feature reflects changes in waveform dimensionality and is sensitive to variation in signal amplitude and frequency. This paper presents a novel method for automatic epileptic seizure detection that uses line length features based on wavelet-transform multiresolution decomposition, combined with an artificial neural network (ANN), to classify EEG signals according to whether a seizure is present. To the knowledge of the authors, no similar work exists in the literature. A well-known public dataset was used to evaluate the proposed method; the high accuracy obtained on three different classification problems demonstrates the effectiveness of the method.
- Published
- 2010
- Full Text
- View/download PDF
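The line length feature described in the abstract above is, per its description, a sum of absolute differences between consecutive samples, which makes it sensitive to both amplitude and frequency changes. A minimal sketch of computing it per analysis window is below; the windowing scheme is an assumption for illustration, and the full method would first apply a wavelet decomposition before extracting the feature on each sub-band.

```python
def line_length(signal):
    """Line length: sum of absolute differences between consecutive samples,
    sensitive to both amplitude and frequency variation."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:]))

def windowed_line_length(signal, window):
    """Line length over non-overlapping windows, yielding one feature value
    per epoch for a downstream classifier such as an ANN."""
    return [line_length(signal[i:i + window])
            for i in range(0, len(signal) - window + 1, window)]

flat  = [0.0, 0.0, 0.0, 0.0]    # no variation -> line length 0
spiky = [0.0, 1.0, -1.0, 1.0]   # large swings -> line length 1 + 2 + 2 = 5
```

A seizure epoch, with its large, fast oscillations, produces a much larger line length than background EEG, which is what lets a simple classifier separate the two.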
9. Evaluation of meta-concepts for information retrieval in a quality-controlled health gateway.
- Author
-
Gehanno JF, Thirion B, and Darmoni SJ
- Subjects
- Abstracting and Indexing, Catalogs as Topic, Health, Internet, Medical Subject Headings, Online Systems, Quality Control, Databases as Topic classification, Information Storage and Retrieval, Vocabulary, Controlled
- Abstract
Background: CISMeF is a French quality-controlled health gateway that uses the MeSH thesaurus. We introduced two new concepts: metaterms (medical specialties with semantic links to one or more MeSH terms, subheadings and resource types) and resource types. Objective: Evaluate the precision and recall of metaterms. Methods: We created 16 pairs of queries. Each pair concerned the same topic, but one used metaterms and the other MeSH terms. To assess precision, each document retrieved by a query was classified as irrelevant, partly relevant or fully relevant. Results: The 16 queries yielded 943 documents for metaterm queries and 139 for MeSH term queries. The recall of MeSH term queries was 0.44 (compared to 1 for metaterm queries), and precision was identical for MeSH term and metaterm queries. Conclusion: Meta-concepts such as CISMeF metaterms allow better recall than MeSH terms, with similar precision, in a quality-controlled health gateway.
- Published
- 2007
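The precision/recall comparison in the abstract above follows the standard retrieval definitions. A small sketch of those definitions is below; the document identifiers are hypothetical, arranged so that a broader (metaterm-style) query retrieves more of the relevant set at the same precision, mirroring the reported result.

```python
def precision_recall(retrieved, relevant):
    """Standard retrieval metrics for one query, given sets of document IDs."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query against four relevant documents.
relevant = {"d1", "d2", "d3", "d4"}
narrow = {"d1", "d2"}               # MeSH-term-style query: misses half
broad = {"d1", "d2", "d3", "d4"}    # metaterm-style query: finds them all
```

Here the narrow query scores precision 1.0 but recall 0.5, while the broad query keeps precision 1.0 with recall 1.0, the same pattern the study reports (recall 0.44 vs. 1 at identical precision).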
10. Development of an ontology-anchored data warehouse meta-model.
- Author
-
Kamal J, Borlawsky T, and Payne PR
- Subjects
- Information Storage and Retrieval, Models, Theoretical, Databases as Topic classification, Databases as Topic organization & administration, Unified Medical Language System
- Abstract
Data warehouses must provide a flexible data model that is integrated with knowledge and metadata describing their components and contents. To provide for advanced query functionality at The Ohio State University Medical Center (OSUMC), we have developed an abstraction layer, or meta-model for our existing Information Warehouse (IW) in order to conceptually and semantically describe and classify its structure and contents using the UMLS.
- Published
- 2007
11. E-Resources at the Osler Library: special collections databases, the Canadian Health Obituaries Index File, and the Bibliography of Canadian Health Sciences Periodicals.
- Author
-
Lyons C
- Subjects
- Bibliographies as Topic, Biographies as Topic, Canada, History, 20th Century, History, 21st Century, Periodicals as Topic history, Workforce, Databases as Topic classification, Databases as Topic history, History of Medicine, Libraries, Medical history
- Published
- 2006
12. Mapping the future.
- Author
-
McCrone J
- Subjects
- Brain abnormalities, Brain anatomy & histology, Brain metabolism, Humans, Brain Mapping, Databases as Topic classification, Databases as Topic statistics & numerical data, Neurosciences
- Published
- 2003
- Full Text
- View/download PDF
13. Impact of genomic technologies on studies of bacterial gene expression.
- Author
-
Rhodius V, Van Dyk TK, Gross C, and LaRossa RA
- Subjects
- Artificial Gene Fusion methods, Cluster Analysis, Multigene Family, Oligonucleotide Array Sequence Analysis methods, Databases as Topic classification, Gene Expression Regulation, Bacterial, Genomics methods
- Abstract
The ability to simultaneously monitor expression of all genes in any bacterium whose genome has been sequenced has only recently become available. This requires not only careful experimentation but also that voluminous data be organized and interpreted. Here we review the emerging technologies that are impacting the study of bacterial global regulatory mechanisms with a view toward discussing both perceived best practices and the current state of the art. To do this, we concentrate upon examples using Escherichia coli and Bacillus subtilis because prior work in these organisms provides a sound basis for comparison.
- Published
- 2002
- Full Text
- View/download PDF
14. Organizational context and taxonomy of health care databases.
- Author
-
Shatin D
- Subjects
- Cost-Benefit Analysis, Databases as Topic classification, Databases as Topic economics, Delivery of Health Care classification, Delivery of Health Care economics, Eligibility Determination classification, Eligibility Determination economics, Eligibility Determination organization & administration, Fees, Pharmaceutical, Health Maintenance Organizations classification, Health Maintenance Organizations economics, Health Maintenance Organizations organization & administration, Health Services Research classification, Health Services Research economics, Health Services Research organization & administration, Humans, Insurance Benefits economics, Medicaid classification, Medicaid economics, Medicaid organization & administration, Pharmacoepidemiology classification, Pharmacoepidemiology economics, Pharmacoepidemiology organization & administration, United States, United States Food and Drug Administration standards, Databases as Topic organization & administration, Delivery of Health Care organization & administration
- Abstract
An understanding of the organizational context and taxonomy of health care databases is essential to appropriately use these data sources for research purposes. Characteristics of the organizational structure of the specific health care setting, including the model type, financial arrangement, and provider access, have implications for accessing and using this data effectively. Additionally, the benefit coverage environment may affect the utility of health care databases to address specific research questions. Coverage considerations that affect pharmacoepidemiologic research include eligibility, the nature of the pharmacy benefit, and regulatory aspects of the treatment under consideration.
- Published
- 2001
- Full Text
- View/download PDF
15. Evaluation of the traumatic coma data bank computed tomography classification for severe head injury.
- Author
-
Vos PE, van Voskuilen AC, Beems T, Krabbe PF, and Vogels OJ
- Subjects
- Adolescent, Adult, Aged, Aged, 80 and over, Analysis of Variance, Child, Child, Preschool, Databases as Topic statistics & numerical data, Female, Glasgow Outcome Scale, Humans, Male, Middle Aged, Observer Variation, Predictive Value of Tests, ROC Curve, Reproducibility of Results, Tomography, X-Ray Computed statistics & numerical data, Trauma Severity Indices, Databases as Topic classification, Head Injuries, Closed classification, Head Injuries, Closed diagnostic imaging, Tomography, X-Ray Computed classification
- Abstract
This study determines the interrater and intrarater reliability of the Traumatic Coma Data Bank (TCDB) computed tomography (CT) scan classification for severe head injury. This classification grades the severity of the injury as follows: I = normal, II = diffuse injury, III = diffuse injury with swelling, IV = diffuse injury with shift, V = mass lesion surgically evacuated, or VI = mass lesion not operated. Patients with severe closed head injury were included. Outcome was assessed using the Glasgow Outcome Score (GOS) at 3 and 6 months. Four observers, two of them classifying the scans twice, independently evaluated the CT scans. Of the initial CT scans of 63 patients (36 males, 27 females; age, 34+/-24 years), 6.3% were class I, 26.9% class II, 28.6% class III, 6.3% class IV, 22.2% class V, and 9.6% class VI. The overall interrater and intrarater reliability was 0.80 and 0.85, respectively. Separate analyses resulted in higher inter- and intrarater reliabilities for the mass lesion categories (V and VI), 0.94 and 0.91, respectively, than for the diffuse categories (I-IV), 0.71 and 0.67. Merging category III with IV, and V with VI, resulted in inter- and intrarater reliabilities of 0.93 and 0.78, respectively. Glasgow outcome scores after 6 months were as follows: 19 dead (30%), one vegetative (2%), five severely disabled (8%), 17 moderately disabled (27%), and 21 good recovery (33%). Association measures (Somers' D) between CT and GOS scores were statistically significant for all observers. This study shows high intra- and interobserver agreement in the assessment of CT scan abnormalities and confirms the predictive power for outcome when the TCDB classification is used.
- Published
- 2001
- Full Text
- View/download PDF
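The abstract above reports interrater reliability coefficients without naming the statistic; for two raters assigning categorical grades, Cohen's kappa is the usual chance-corrected choice, so the sketch below shows that computation as a general illustration rather than the study's exact method. The example ratings are hypothetical TCDB-style grades.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters who
    assign categorical labels to the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Agreement expected by chance from each rater's marginal label frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical TCDB grades from two raters disagreeing on one scan.
kappa = cohens_kappa(["I", "II", "II", "V"], ["I", "II", "III", "V"])
```

Kappa discounts agreement that would occur by chance alone, which is why it, rather than raw percent agreement, is the conventional reliability measure in classification studies like this one.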
16. A framework for an institutional high level security policy for the processing of medical data and their transmission through the Internet.
- Author
-
Ilioudis C and Pangalos G
- Subjects
- Access to Information legislation & jurisprudence, Canada, Computer Security legislation & jurisprudence, Databases as Topic classification, Databases as Topic legislation & jurisprudence, Education, Professional legislation & jurisprudence, Europe, Humans, Informed Consent legislation & jurisprudence, Medical Informatics Computing legislation & jurisprudence, Patient Rights legislation & jurisprudence, Quality of Health Care legislation & jurisprudence, United States, Computer Security standards, Confidentiality standards, Guidelines as Topic, Internet standards, Medical Informatics Computing standards, Medical Records Systems, Computerized standards, Organizational Policy
- Abstract
Background: The Internet provides many advantages when used for interaction and data sharing among health care providers, patients, and researchers. However, the advantages provided by the Internet come with a significantly greater element of risk to the confidentiality, integrity, and availability of information. It is therefore essential that Health Care Establishments processing and exchanging medical data use an appropriate security policy. Objective: To develop a High Level Security Policy for the processing of medical data and their transmission through the Internet, which is a set of high-level statements intended to guide Health Care Establishment personnel who process and manage sensitive health care information. Methods: We developed the policy based on a detailed study of the existing framework in the EU countries, USA, and Canada, and on consultations with users in the context of the Intranet Health Clinic project. More specifically, this paper has taken into account the major directives, technical reports, laws, and recommendations that are related to the protection of individuals with regard to the processing of personal data, and the protection of privacy and medical data on the Internet. Results: We present a High Level Security Policy for Health Care Establishments, which includes a set of 7 principles and 45 guidelines detailed in this paper. The proposed principles and guidelines have been made as generic and open to specific implementations as possible, to provide for maximum flexibility and adaptability to local environments. The High Level Security Policy establishes the basic security requirements that must be addressed to use the Internet to safely transmit patient and other sensitive health care information. Conclusions: The High Level Security Policy is primarily intended for large Health Care Establishments in Europe, the USA, and Canada.
It is clear, however, that the general framework presented here can only serve as reference material for developing an appropriate High Level Security Policy in a specific implementation environment. When implemented in specific environments, these principles and guidelines must also be complemented by more specific measures. Even when a High Level Security Policy already exists in an institution, it is advisable that the management of the Health Care Establishment periodically revisit it to see whether it should be modified or augmented.
- Published
- 2001
- Full Text
- View/download PDF
17. Development and assignment of bovine-specific PCR systems for the Texas nomenclature marker genes and isolation of homologous BAC probes.
- Author
-
Gautier M, Laurent P, Hayes H, and Eggen A
- Subjects
- Animals, Chromosome Banding standards, Chromosomes, Artificial, Bacterial genetics, Cricetinae, DNA Primers, DNA Probes, Databases as Topic classification, Humans, Hybrid Cells, In Situ Hybridization, Fluorescence, Karyotyping, Physical Chromosome Mapping standards, Texas, Cattle genetics, Genetic Markers, Polymerase Chain Reaction methods, Terminology as Topic
- Abstract
In 1996, Popescu et al. published the Texas standard nomenclature of the bovine karyotype in which 31 marker genes, already mapped in man, were chosen to permit unambiguous identification and numbering of each bovine chromosome. However, specific PCR systems were not available for each marker gene thus preventing the assignment of part of these markers by somatic cell hybrid analysis. In addition, some difficulties remained with the nomenclature of BTA25, BTA27 and BTA29. In this work, specific PCR systems were developed for each of the marker genes except VIL1 (see results), from either existing bovine or human sequences, and a bovine BAC library was screened to obtain the corresponding BAC clones. These PCR systems were used successfully to confirm the assignment of each marker gene (except for LDHA, see results) by analysis on the INRA hamster-bovine somatic cell hybrid panel. The difficulties observed for LDHA and VIL1 are probably due to the fact that these genes belong to large gene families and therefore suggest that they may not be the most appropriate markers for a standardisation effort. This panel of BACs is available to the scientific community and has served as a basis for the establishment of a revised standard nomenclature of bovine chromosomes.
- Published
- 2001
- Full Text
- View/download PDF
18. The national cancer data base: what does it mean to the community surgeon?
- Author
-
Bland KI, Menck HR, Partridge EE, Fremgen A, Scott-Conner CE, Hundahl S, Winchester DP, and Morrow M
- Subjects
- Centers for Disease Control and Prevention, U.S., Data Collection, Hospitals, Community, Humans, Information Services, Patient Care, Registries, SEER Program, Treatment Outcome, United States, Databases as Topic classification, Databases as Topic organization & administration, Databases as Topic standards, Databases as Topic trends, General Surgery, Neoplasms therapy
- Published
- 2000
- Full Text
- View/download PDF
19. Integration and beyond: panel discussion.
- Author
-
Stead WW, Miller RA, Musen MA, and Hersh WR
- Subjects
- Abstracting and Indexing, Databases as Topic classification, Medical Informatics trends, Software, Databases as Topic organization & administration, Medical Informatics organization & administration, Medical Informatics Applications, Systems Integration
- Published
- 2000
- Full Text
- View/download PDF
20. DataFoundry: information management for scientific data.
- Author
-
Critchlow T, Fidelis K, Ganesh M, Musick R, and Slezak T
- Subjects
- Computer Systems, Costs and Cost Analysis, Database Management Systems classification, Database Management Systems economics, Database Management Systems organization & administration, Humans, Information Services organization & administration, Information Systems classification, Information Systems economics, Information Systems organization & administration, Systems Integration, Databases as Topic classification, Databases as Topic economics, Databases as Topic organization & administration, Information Management classification, Information Management economics, Information Management organization & administration, Science
- Abstract
Data warehouses and data marts have been successfully applied to a multitude of commercial business applications. They have proven to be invaluable tools by integrating information from distributed, heterogeneous sources and summarizing this data for use throughout the enterprise. Although the need for information dissemination is as vital in science as in business, working warehouses in this community are scarce because traditional warehousing techniques do not transfer to scientific environments. There are two primary reasons for this difficulty. First, schema integration is more difficult for scientific databases than for business sources, because of the complexity of the concepts and the associated relationships. While this difference has not yet been fully explored, it is an important consideration when determining how to integrate autonomous sources. Second, scientific data sources have highly dynamic data representations (schemata). When a data source participating in a warehouse changes its schema, both the mediator transferring data to the warehouse and the warehouse itself need to be updated to reflect these modifications. The cost of repeatedly performing these updates in a traditional warehouse, as is required in a dynamic environment, is prohibitive. This paper discusses these issues within the context of the DataFoundry project, an ongoing research effort at Lawrence Livermore National Laboratory. DataFoundry utilizes a unique integration strategy to identify corresponding instances while maintaining differences between data from different sources, and a novel architecture and an extensive meta-data infrastructure, which reduce the cost of maintaining a warehouse.
- Published
- 2000
- Full Text
- View/download PDF
21. Using administrative databases for outcomes research: select examples from VA Health Services Research and Development.
- Author
-
Cowper DC, Hynes DM, Kubal JD, and Murphy PA
- Subjects
- Ambulatory Care, Death Certificates, Hospitalization, Humans, Medical Records Systems, Computerized, Patient Identification Systems, Patient-Centered Care, United States, Databases as Topic classification, Databases as Topic organization & administration, Health Services Research, Management Information Systems classification, Outcome Assessment, Health Care, United States Department of Veterans Affairs
- Abstract
The U.S. Department of Veterans Affairs (VA) operates and maintains one of the largest health care systems under a single management structure in the world. The coordination of administrative and clinical information on veterans served by the VA health care system is a daunting and critical function of the Department. This article provides an overview of VA Health Services Research and Development Service initiatives to assist researchers in using extant VA databases to study patient-centered health care outcomes. As examples, studies using the VA's Patient Treatment File (PTF) and the Beneficiary Identification and Records Locator System (BIRLS) Death File are described.
- Published
- 1999
- Full Text
- View/download PDF
22. Secondary data bases and their use in outcomes research: a review of the area resource file and the Healthcare Cost and Utilization Project.
- Author
-
Best AE
- Subjects
- Centers for Medicare and Medicaid Services, U.S., Cost-Benefit Analysis, Health Maintenance Organizations, Health Services Research, Hospitals, Humans, Medicare, Quality of Health Care, United States, United States Agency for Healthcare Research and Quality, Databases as Topic classification, Databases as Topic economics, Databases as Topic organization & administration, Health Care Costs trends, Health Resources classification, Health Resources economics, Health Resources organization & administration, Health Services statistics & numerical data, Outcome Assessment, Health Care
- Abstract
Secondary data sources are being used more frequently than ever in outcomes research. The speed and relatively low cost of these data bases make them ideal for analyzing outcomes. Today's researcher has numerous secondary data bases available for use, yet few publications exist to help researchers locate the ideal data set for their needs. Herein, two national secondary data bases are reviewed: the Area Resource File (ARF) and the Healthcare Cost and Utilization Project (HCUP). These two data sets represent the two types of secondary data, aggregate and individual: ARF represents an aggregate data set and HCUP an individual data set. The advantages of each type of secondary data are also reviewed.
- Published
- 1999
- Full Text
- View/download PDF
23. Using Department of Veterans Affairs Administrative databases to examine long-term care utilization for men and women veterans.
- Author
-
Guihan M, Weaver FM, Cowper DC, Nydam T, and Miskevics S
- Subjects
- Age Factors, Aged, Aged, 80 and over, Ambulatory Care statistics & numerical data, Data Collection, Demography, Female, Health Resources, Health Services Needs and Demand, Hospitalization, Humans, Internet, Long-Term Care economics, Male, Manuals as Topic, Nursing Homes statistics & numerical data, Patient Admission, Sex Factors, Time Factors, United States, Women's Health, Databases as Topic classification, Databases as Topic organization & administration, Long-Term Care statistics & numerical data, United States Department of Veterans Affairs, Veterans
- Abstract
We examined long-term care (LTC) utilization by male and female veterans using administrative databases maintained by the VA. Research questions included: (1) Which LTC services are utilized? (2) Do utilization patterns of older veterans differ from those of elderly persons in the general U.S. population? (3) Do LTC needs of veterans vary by gender? We were unable to track the LTC utilization of individuals across administrative databases. Some databases could provide information only at the national level, others were available only at local facilities, and others held only patient- or program-level data, making it impossible to get a clear picture of all the services received by an individual. Those planning to use administrative databases to conduct research must: (1) take more time than expected; (2) be flexible and willing to compromise; (3) "ferret out" information; and (4) recognize that, because of the dynamism inherent in information systems, results may change over time.
- Published
- 1999
24. Transparent image access in a distributed picture archiving and communications system: the Master Database broker.
- Author
-
Cox RD, Henri CJ, and Rubin RK
- Subjects
- CD-ROM, Cost-Benefit Analysis, Diagnostic Imaging, Humans, Information Storage and Retrieval, Internet, Computer Communication Networks economics, Databases as Topic classification, Databases as Topic economics, Databases as Topic organization & administration, Radiology Information Systems classification, Radiology Information Systems economics, Radiology Information Systems organization & administration
- Abstract
A distributed design is the most cost-effective system for small- to medium-scale picture archiving and communications system (PACS) implementations. However, the design presents an interesting challenge to developers and implementers: to make stored image data, distributed throughout the PACS network, appear centralized, with a single access point for users. A key component of the distributed system is a central or master database containing all the studies that have been scanned into the PACS. Each study includes a list of one or more locations for that particular dataset so that applications can easily find it. Non-Digital Imaging and Communications in Medicine (DICOM) clients, such as our worldwide web (WWW)-based PACS browser, query the master database directly to find the images, then jump to the most appropriate location via a distributed web-based viewing system. The Master Database Broker provides DICOM clients with the same functionality by translating DICOM queries into master database searches and distributing retrieval requests transparently to the appropriate source. The Broker also acts as a storage service class provider, allowing users to store selected image subsets and reformatted images with the original study without having to know on which server the original data are stored.
- Published
- 1999
25. Evaluation of the Information Sources Map.
- Author
-
Mendonça EA and Cimino JJ
- Subjects
- Evaluation Studies as Topic, Abstracting and Indexing, Databases as Topic classification, Information Storage and Retrieval methods, Subject Headings, Unified Medical Language System
- Abstract
As part of preliminary studies for the development of a digital library, we have studied the possibility of using the UMLS Information Sources Map (ISM) database to connect and map different terminologies, as well as to facilitate access to available information sources. The main issues discussed are the indexing of and connection to relevant online sources. We found the features of the ISM to be consistent with the need to support automated source selection and retrieval; however, attention should be paid to three aspects of the information: granularity, completeness, and accuracy. Overall, the ISM is potentially useful, but significant modifications will be required if it is to support automated source selection and retrieval.
- Published
- 1999
26. Automated knowledge acquisition from clinical databases based on rough sets and attribute-oriented generalization.
- Author
-
Tsumoto S
- Subjects
- Algorithms, Congenital Abnormalities, Databases as Topic classification, Fuzzy Logic, Humans, Decision Support Techniques, Diagnosis, Computer-Assisted, Expert Systems
- Abstract
Rule induction methods have been proposed to acquire knowledge automatically from databases. However, conventional approaches do not focus on the implementation of induced results in an expert system. In this paper, the author focuses not only on rule induction but also on its evaluation, presenting a systematic approach from the former to the latter. First, a rule induction system based on rough sets and attribute-oriented generalization is introduced and applied to a database of congenital malformations to extract diagnostic rules. Then, using the induced knowledge, an expert system that makes a differential diagnosis of congenital disorders is developed. Finally, this expert system is evaluated in an outpatient clinic; the results show that the system performs as well as a medical expert.
- Published
- 1998
27. A network of web multimedia medical information servers for a medical school and university hospital.
- Author
-
Denier P, Le Beux P, Delamarre D, Fresnel A, Cleret M, Courtin C, Seka LP, Pouliquen B, Cleran L, Riou C, Burgun A, Jarno P, Leduff F, Lesaux H, and Duvauferrier R
- Subjects
- Artificial Intelligence, Computer-Assisted Instruction, Databases as Topic classification, Databases, Bibliographic, Diagnosis, Computer-Assisted, Education, Medical, France, Humans, Hypermedia, International Cooperation, Local Area Networks, Medical Records Systems, Computerized, Multimedia, Research, Students, Medical, Telecommunications, Terminology as Topic, Computer Communication Networks, Hospitals, University, Information Systems, Schools, Medical
- Abstract
Modern medicine requires rapid access to information, including clinical data from medical records, bibliographic databases, knowledge bases, and nomenclature databases. This is especially true for university hospitals and medical schools, both for training and for fundamental and clinical research on diagnosis and therapy. It implies the development of local, national, and international cooperation, which can be enhanced through the use of computer networks such as the Internet. The development of professional cooperative networks goes hand in hand with the development of telecommunication and computer networks, and our project is to make these new tools and technologies accessible to medical students both during teaching time in the Medical School and during training periods at the University Hospital. We have developed a local area network connecting the School of Medicine and the Hospital that takes advantage of the new Web client-server technology, both internally (Intranet) and externally through access to the National Research Network (RENATER in France), which is connected to the Internet. The address of our public web server is http://www.med.univ-rennes1.fr.
- Published
- 1997
28. What's in a name?
- Subjects
- Animals, Genomic Library, Phenotype, Databases as Topic classification, Drosophila genetics, Terminology as Topic
- Published
- 1996