520 results for "*DATABASE searching"
Search Results
2. Searches as data: archiving and sharing search strategies using an institutional data repository.
- Author
- Rod, Alisa B. and Boruff, Jill T.
- Subjects
- *DATA curation, *DATA warehousing, *PUBLISHING, *PROFESSIONS, *LIBRARY science, *ACADEMIC libraries, *DATABASE searching, *SYSTEMATIC reviews, *LIBRARY public services, *MEDICAL care research, *DATABASE management, *INFORMATION retrieval, *COMMUNICATION, *DATA mining, RESEARCH evaluation
- Abstract
Background: By defining search strategies and related database exports as code/scripts and data, librarians and information professionals can expand the mandate of research data management (RDM) infrastructure to include this work. This new initiative aimed to create a space in McGill University's institutional data repository for our librarians to deposit and share their search strategies for knowledge syntheses (KS). Case Presentation: The authors, a health sciences librarian and an RDM specialist, created a repository collection of librarian-authored KS searches in McGill University's Borealis Dataverse collection. We developed and hosted a half-day "Dataverse-a-thon" where we worked with a team of health sciences librarians to develop a standardized KS data management plan (DMP), search reporting documentation, Dataverse software training, and how-to guidance for the repository. Conclusion: In addition to better documentation and tracking of KS searches at our institution, the KS Dataverse collection enables sharing of searches among colleagues, with discoverable metadata fields for searching within deposited searches. While the initial creation of the DMP and documentation took about six hours, the subsequent deposit of search strategies into the institutional data repository requires minimal effort (e.g., 5-10 minutes on average per deposit). The Dataverse collection also empowers librarians to retain intellectual ownership over search strategies as valuable stand-alone research outputs and to raise the visibility of their labor. Overall, institutional data repositories provide specific benefits in facilitating compliance both with PRISMA-S guidance and with RDM best practices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Meeting a need: development and validation of PubMed search filters for immigrant populations.
- Author
- Wafford, Q. Eileen, Miller, Corinne H., Wescott, Annie B., and Kubilius, Ramune K.
- Subjects
- *IMMIGRANTS, *ONLINE information services, *DATABASE searching, *INFORMATION retrieval, *DESCRIPTIVE statistics, *MEDLINE, *LIBRARIANS, *MEDICAL research
- Abstract
Objective: There is a need for additional comprehensive and validated filters to find relevant references more efficiently in the growing body of research on immigrant populations. Our goal was to create reliable search filters that direct librarians and researchers to pertinent studies indexed in PubMed about health topics specific to immigrant populations. Methods: We applied a systematic and multi-step process that combined information from expert input, authoritative sources, automation, and manual review of sources. We established a focused scope and eligibility criteria, which we used to create the development and validation sets. We formed a term ranking system that resulted in the creation of two filters: an immigrant-specific and an immigrant-sensitive search filter. Results: When tested against the validation set, the specific filter achieved a sensitivity of 88.09%, specificity of 97.26%, precision of 97.88%, and NNR of 1.02. The sensitive filter's sensitivity was 97.76% when tested against the development set. The sensitive filter had a sensitivity of 97.14%, specificity of 82.05%, precision of 88.59%, accuracy of 90.94%, and NNR [See Table 1] of 1.13 when tested against the validation set. Conclusion: We accomplished our goal of developing PubMed search filters to help researchers retrieve studies about immigrants. The specific and sensitive PubMed search filters give information professionals and researchers options to maximize the specificity and precision or increase the sensitivity of their search for relevant studies in PubMed. Both search filters generated strong performance measurements and can be used as-is, to capture a subset of immigrant-related literature, or adapted and revised to fit the unique research needs of specific project teams (e.g., remove US-centric language, add location-specific terminology, or expand the search strategy to include terms for the topic/s being investigated in the immigrant population identified by the filter).
There is also a potential for teams to employ the search filter development process described here for their own topics and use. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
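The validation measures reported in the filter study above (sensitivity, specificity, precision, NNR) all derive from a standard retrieval confusion matrix. A minimal sketch of the arithmetic, using hypothetical counts rather than the study's actual validation data:

```python
def filter_metrics(tp, fp, fn, tn):
    """Standard performance measures for a search filter.

    tp/fp/fn/tn are counts from comparing filter output against a
    gold-standard validation set (hypothetical numbers here).
    """
    sensitivity = tp / (tp + fn)      # share of relevant records retrieved
    specificity = tn / (tn + fp)      # share of irrelevant records excluded
    precision = tp / (tp + fp)        # share of retrieved records that are relevant
    nnr = 1 / precision               # number needed to read per relevant hit
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, nnr, accuracy

# Illustrative only: a filter that finds 88 of 100 relevant records
# while letting through 2 of 73 irrelevant ones.
sens, spec, prec, nnr, acc = filter_metrics(tp=88, fp=2, fn=12, tn=71)
```

Note that NNR is simply the reciprocal of precision, which is why a precision near 98% yields an NNR very close to 1.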
4. Roll Your Own SWOT Analyses Using GenAI.
- Author
- Ojala, Marydee
- Subjects
- *INTERNET searching, *DATABASES, *DATABASE searching, *ARTIFICIAL intelligence, *LIBRARIANS, *STRATEGIC planning, *INFORMATION retrieval, *BUSINESS intelligence, PLANNING techniques
- Abstract
The article discusses the utilization of SWOT analyses in business and research. It covers the importance of SWOT matrices, their applications in various contexts like strategic planning and investment decisions, and the distinction between internal strengths/weaknesses and external opportunities/threats.
- Published
- 2024
5. New Bounds and a Generalization for Share Conversion for 3-Server PIR.
- Author
- Paskin-Cherniavsky, Anat and Nissenbaum, Olga
- Subjects
- *BLOCKCHAINS, *ELECTRONIC information resource searching, *GENERALIZATION, *DATABASE searching, *INFORMATION retrieval, *STREAMING media
- Abstract
Private Information Retrieval (PIR) protocols, which allow a client to obtain data from servers without revealing its request, have many applications such as anonymous communication, media streaming, blockchain security, and advertisement. Multi-server PIR protocols, where the database is replicated among non-colluding servers, provide high efficiency in the information-theoretic setting. Beimel et al. at CCC '12 (further referred to as BIKO) put forward a paradigm for constructing multi-server PIR, capturing several previous constructions for k ≥ 3 servers, as well as improving the best-known share complexity for 3-server PIR. A key component there is a share conversion scheme from corresponding linear three-party secret sharing schemes with respect to a certain type of "modified universal" relation. In a useful particular instantiation of the paradigm, they used a share conversion from (2,3)-CNF over Z_m to three-additive sharing over Z_{p^β} for primes p_1, p_2, p where p_1 ≠ p_2 and m = p_1 · p_2, and the relation is the modified universal relation C_{S_m}. They reduced the question of the existence of the share conversion for a triple (p_1, p_2, p) to the (in)solvability of a certain linear system over Z_p, and provided an efficient (in m, log p) construction of such a sharing scheme. Unfortunately, the size of the system is Θ(m²), which makes a direct solution infeasible for large m in practice. Paskin-Cherniavsky and Schmerler in 2019 proved the existence of the conversion for odd p_1, p_2 when p = p_1, obtaining infinitely many parameters for which the conversion exists, but for infinitely many others the question remained open. In this work, using algebraic techniques from the work of Paskin-Cherniavsky and Schmerler, we prove the existence of the conversion for even m in the case p = 2 (we computed β in this case) and the absence of the conversion for even m in the case p > 2.
This does not improve the concrete efficiency of 3-server PIR; however, our result is promising in the broader context of constructing PIR through composition techniques with k ≥ 3 servers, using the relation C_{S_m} where m has more than two prime divisors. Another suggestion of ours concerning 3-server PIR is that a shorter server response can be achieved using the relation C_{S'_m} for an extended set S'_m ⊃ S_m. By computer search in the BIKO framework, we found several such sets for small m which result in a share conversion from (2,3)-CNF over Z_m to 3-additive secret sharing over Z_{p^{β′}}, where β′ > 0 is several times smaller than β, implying a several-times-shorter server response. We also suggest that such extended sets S'_m may result in better PIR due to the potential existence of matching vector families with higher Vapnik-Chervonenkis dimension. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Irreproducibility in searches of scientific literature: A comparative analysis.
- Author
- Pozsgai, Gábor, Lövei, Gábor L., Vasseur, Liette, Gurr, Geoff, Batáry, Péter, Korponai, János, Littlewood, Nick A., Liu, Jian, Móra, Arnold, Obrycki, John, Reynolds, Olivia, Stockan, Jenni A., VanVolkenburg, Heather, Zhang, Jie, Zhou, Wenwu, and You, Minsheng
- Subjects
- *SCIENTIFIC literature, *COMPARATIVE literature, *METADATA, *COMPARATIVE studies, *SEARCH algorithms, *DATABASE searching
- Abstract
Repeatability is the cornerstone of science, and it is particularly important for systematic reviews. However, little is known about how researchers' choice of database and search platform influences the repeatability of systematic reviews. Here, we aim to unveil how the computing environment and the location from which a search is initiated influence hit results. We present a comparative analysis of time-synchronized searches at different institutional locations in the world and evaluate the consistency of hits obtained within each of the search terms using different search platforms. We revealed a large variation among search platforms and showed that PubMed and Scopus returned consistent results to identical search strings from different locations. Google Scholar and Web of Science's Core Collection varied substantially both in the number of returned hits and in the list of individual articles depending on the search location and computing environment. Inconsistency in Web of Science results most likely emerged from the different licensing packages at different institutions. To maintain scientific integrity and consistency, especially in systematic reviews, action is needed from both the scientific community and scientific search platforms to increase search consistency. Researchers are encouraged to report the search location and the databases used for systematic reviews, and database providers should make search algorithms transparent and revise access rules to titles behind paywalls. Additional options for increasing the repeatability and transparency of systematic reviews are storing both search metadata and hit results in open repositories and using Application Programming Interfaces (APIs) to retrieve standardized, machine-readable search metadata. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
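One way to quantify the cross-platform inconsistency described in the study above is to compare the record identifiers (e.g., DOIs) returned by the same query on two platforms. A minimal sketch with made-up identifiers; the Jaccard index used here is a generic overlap measure, not the paper's own metric:

```python
def hit_overlap(hits_a, hits_b):
    """Jaccard overlap between two result sets of record identifiers:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(hits_a), set(hits_b)
    if not a and not b:
        return 1.0  # two empty result sets agree trivially
    return len(a & b) / len(a | b)

# Hypothetical DOI lists from the same query run on two platforms:
# 3 shared records out of 6 distinct ones -> overlap of 0.5.
platform_1 = ["10.1/aa", "10.1/bb", "10.1/cc", "10.1/dd"]
platform_2 = ["10.1/bb", "10.1/cc", "10.1/dd", "10.1/ee", "10.1/ff"]
overlap = hit_overlap(platform_1, platform_2)
```

Running time-synchronized queries and logging this overlap per platform pair would make the kind of inconsistency the authors report directly measurable.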
7. Critical assessment of Shape Retrieval Tools (SRTs).
- Author
- Xiao, Xinyi, Joshi, Sanjay, and Cecil, J.
- Subjects
- *SEARCH algorithms, *INFORMATION retrieval, *DATABASE searching, *KEY performance indicators (Management), *ALGORITHMS, *EVALUATION methodology
- Abstract
In today's design and manufacturing context, designers often modify existing 3D shapes (or design models) instead of creating a new design from scratch. This requires the ability to search an existing database of designs/3D models to identify and extract similar designs. Shape Retrieval Tools (SRTs) have been developed to play an essential role in saving the time and effort needed to retrieve and generate new designs. The capabilities of commercially available SRTs vary based on the form of the input design model, the search technique or algorithm used, the search/retrieval time, ease of use, and the quality of results. The focus of this paper is to study their capabilities, performance, and differences, and to develop criteria to compare the effectiveness and performance of such Shape Retrieval Tools. Current search evaluation methods, such as precision and recall, are based on human interpretation of the results. This paper presents a holistic set of metrics for comparing the performance and effectiveness of SRTs, including data input options (to search), effectiveness of the search process, the associated retrieval time, overall ease of use, and additional data retrieval details. An algorithm is proposed to objectively analyze the search results based on the proposed Model Match Ratio (MMR), computed from the variance between the input and retrieved geometries. The search results are usually presented in a rank-ordered list. A Precision Sequence Metric (PSM) is developed to evaluate the retrieved list by ranking the retrieved results based on the MMR, thereby evaluating the quality of the search. The proposed evaluation algorithm was tested on several design models (and their subsequent retrieval results) involving three SRTs (Vizseek, Geolus, and CADENAS); the results of the comparison of the performance of these SRTs are discussed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Development of an efficient search filter to retrieve systematic reviews from PubMed.
- Author
- Salvador-Oliván, José Antonio, Marco-Cuenca, Gonzalo, and Arquero-Avilés, Rosario
- Subjects
- *ONLINE information services, *DATABASE searching, *SYSTEMATIC reviews, *SEARCH engines, *INFORMATION retrieval, *MEDLINE
- Abstract
Objective: Locating systematic reviews is essential for clinicians and researchers when creating or updating reviews and for decision-making in health care. This study aimed to develop a search filter for retrieving systematic reviews that improves upon the performance of the PubMed systematic review search filter. Methods: Search terms were identified from abstracts of reviews published in Cochrane Database of Systematic Reviews and the titles of articles indexed as systematic reviews in PubMed. Both the precision of the candidate terms and the number of systematic reviews retrieved from PubMed were evaluated after excluding the subset of articles retrieved by the PubMed systematic review filter. Terms that achieved a precision greater than 70% and relevant publication types indexed with MeSH terms were included in the filter search strategy. Results: The search strategy used in our filter added specific terms not included in PubMed's systematic review filter and achieved a 61.3% increase in the number of retrieved articles that are potential systematic reviews. Moreover, it achieved an average precision that is likely greater than 80%. Conclusions: The developed search filter will enable users to identify more systematic reviews from PubMed than the PubMed systematic review filter with high precision. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Characterization and selection of Japanese electronic health record databases used as data sources for non-interventional observational studies.
- Author
- Wakabayashi, Yumi, Eitoku, Masamitsu, and Suganuma, Narufumi
- Subjects
- *ELECTRONIC health records, *SCIENTIFIC observation, *MEDICAL registries, *DATABASES, *DATABASE searching, *RESEARCH, *RESEARCH methodology, *RETROSPECTIVE studies, *MEDICAL cooperation, *EVALUATION research, *COMPARATIVE studies, *INFORMATION retrieval, *RESEARCH funding, *LONGITUDINAL method
- Abstract
Background: Interventional studies are the fundamental method for obtaining answers to clinical questions. However, these studies are sometimes difficult to conduct because of insufficient financial or human resources or the rarity of the disease in question. One means of addressing these issues is to conduct a non-interventional observational study using electronic health record (EHR) databases as the data source, although how best to evaluate the suitability of an EHR database when planning a study remains to be clarified. The aim of the present study is to identify and characterize the data sources that have been used for conducting non-interventional observational studies in Japan and to propose a flow diagram to help researchers determine the most appropriate EHR database for their study goals. Methods: We compiled a list of published articles reporting observational studies conducted in Japan by searching PubMed for relevant articles published in the last 3 years and by searching database providers' publication lists related to studies using their databases. For each article, we reviewed the abstract and/or full text to obtain information about data source, target disease or therapeutic area, number of patients, and study design (prospective or retrospective). We then characterized the identified EHR databases. Results: In Japan, non-interventional observational studies have been mostly conducted using data stored locally at individual medical institutions (663/1511) or collected from several collaborating medical institutions (315/1511). Whereas the studies conducted with large-scale integrated databases (330/1511) were mostly retrospective (73.6%), 27.5% of the single-center studies, 47.6% of the multi-center studies, and 73.7% of the post-marketing surveillance studies identified in the present study were conducted prospectively.
We used our findings to develop an assessment flow diagram to assist researchers in evaluating and choosing the most suitable EHR database for their study goals. Conclusions: Our analysis revealed that non-interventional observational studies in Japan were conducted using data stored locally at individual medical institutions or collected from collaborating medical institutions. Disease registries, disease databases, and large-scale databases would enable researchers to conduct studies with large sample sizes to provide robust data from which strong inferences could be drawn. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Opportunities and challenges in search interaction.
- Author
- White, Ryen W.
- Subjects
- *ELECTRONIC information resource searching, *INFORMATION retrieval, *DATABASE searching, *SEARCH algorithms, *ONLINE databases, *INTELLIGENT personal assistants, *INFORMATION storage & retrieval systems
- Abstract
Seeking to address a wider range of user requests toward task completion. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
11. The NICE OECD countries' geographic search filters: Part 1--methodology for developing the draft MEDLINE and Embase (Ovid) filters.
- Author
- Ayiku, Lynda, Levay, Paul, and Hudson, Thomas
- Subjects
- *MEDICAL information storage & retrieval systems, *SUBJECT headings, *DATABASE searching, *MEDICAL protocols, *INFORMATION retrieval, *MEDLINE, *MEDICAL research
- Abstract
Objective: There are no existing validated search filters for the group of 37 Organisation for Economic Co-operation and Development (OECD) countries. This study describes how information specialists from the United Kingdom's National Institute for Health and Care Excellence (NICE) developed and evaluated novel OECD countries' geographic search filters for MEDLINE and Embase (Ovid) to improve literature search effectiveness for evidence about OECD countries. Methods: We created the draft filters using an alternative approach to standard filter construction. They are composed entirely of geographic subject headings and are designed to retain OECD country evidence by excluding non-OECD country evidence using the NOT Boolean operator. To evaluate the draft filters' effectiveness, we used MEDLINE and Embase literature searches for three NICE guidelines that retrieved >5,000 search results. A 10% sample of the excluded references was screened to check that OECD country evidence was not inadvertently excluded. Results: The draft MEDLINE filter reduced results for each NICE guideline by 9.5% to 12.9%. In Embase, search results were reduced by 10.7% to 14%. Of the sample references, 7 of 910 (0.8%) were excluded inadvertently. These references were from a guideline about looked-after minors that concerns both OECD and non-OECD countries. Conclusion: The draft filters look promising--they reduced search result volumes while retaining most OECD country evidence from MEDLINE and Embase. However, we advise caution when using them in topics about both non-OECD and OECD countries. We have created final versions of the search filters and will validate them in a future study. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
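The evaluation in the NICE filter study above reduces to two simple proportions: the percentage reduction in search results after applying the NOT-based geographic filter, and the inadvertent-exclusion rate in the screened sample. A sketch using the abstract's reported sample figures, plus hypothetical result volumes:

```python
def percent_reduction(before, after):
    """Percentage of search results removed by the exclusion filter."""
    return 100 * (before - after) / before

def inadvertent_exclusion_rate(wrongly_excluded, sample_size):
    """Share of sampled excluded references that were relevant after all."""
    return 100 * wrongly_excluded / sample_size

# From the abstract: 7 of 910 sampled excluded references turned out
# to concern OECD countries -> roughly 0.8%.
rate = inadvertent_exclusion_rate(7, 910)

# Hypothetical volumes: 6,000 results cut to 5,300 gives a reduction
# within the 9.5-14% range reported for the draft filters.
reduction = percent_reduction(6000, 5300)
```

The trade-off the authors describe is visible in these two numbers: a double-digit reduction in screening burden against a well-under-1% loss of relevant evidence.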
12. PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews.
- Author
- Rethlefsen, Melissa L., Kirtley, Shona, Waffenschmidt, Siw, Ayala, Ana Patricia, Moher, David, Page, Matthew J., and Koffel, Jonathan B.
- Subjects
- *EXPERIMENTAL design, *CONSENSUS (Social sciences), *DATABASES, *PROFESSIONAL peer review, *RESEARCH methodology, *SYSTEMATIC reviews, *RESEARCH methodology evaluation, *INTERNET, *DATABASE searching, *INTERNET searching, *CITATION analysis, *DOCUMENTATION, *INFORMATION retrieval, *INFORMATION resources, *DELPHI method, RESEARCH evaluation
- Abstract
Background: Literature searches underlie the foundations of systematic reviews and related review types. Yet, the literature searching component of systematic reviews and related review types is often poorly reported. Guidance for literature search reporting has been diverse and, in many cases, does not offer enough detail to authors who need more specific information about reporting search methods and information sources in a clear, reproducible way. This document presents the PRISMA-S (Preferred Reporting Items for Systematic reviews and Meta-Analyses literature search extension) checklist, and explanation and elaboration. Methods: The checklist was developed using a three-stage Delphi survey process, followed by a consensus conference and public review process. Results: The final checklist includes sixteen reporting items, each of which is detailed with exemplar reporting and rationale. Conclusions: The intent of PRISMA-S is to complement the PRISMA Statement and its extensions by providing a checklist that could be used by interdisciplinary authors, editors, and peer reviewers to verify that each component of a search is completely reported and, therefore, reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Use of a search summary table to improve systematic review search methods, results, and efficiency.
- Author
- Bethel, Alison C., Rogers, Morwenna, and Abbott, Rebecca
- Subjects
- *DATABASE searching, *INFORMATION retrieval, *MEDICAL protocols, *PUBLISHING, *SERIAL publications, *SYSTEMATIC reviews, *ACCURACY
- Abstract
Background: Systematic reviews are comprehensive, robust, inclusive, transparent, and reproducible when bringing together the evidence to answer a research question. Various guidelines provide recommendations on the expertise required to conduct a systematic review, where and how to search for literature, and what should be reported in the published review. However, the finer details of the search results are not typically reported to allow the search methods or search efficiency to be evaluated. Case Presentation: This case study presents a search summary table, containing the details of which databases were searched, which supplementary search methods were used, and where the included articles were found. It was developed and published alongside a recent systematic review. This simple format can be used in future systematic reviews to improve search results reporting. Conclusions: Publishing a search summary table in all systematic reviews would add to the growing evidence base about information retrieval, which would help in determining which databases to search for which type of review (in terms of either topic or scope), what supplementary search methods are most effective, what type of literature is being included, and where it is found. It would also provide evidence for future searching and search methods research. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Publications search optimization: Comparison of a homegrown—API approach versus manual publication searches at an NCI designated cancer center.
- Author
- Cernik, Colin, Fife, John, Thompson, Jeffrey, Harlan-Williams, Lisa, and Mudaranthakam, Dinesh Pal
- Subjects
- *AUTOMATION, *CANCER treatment, *CLINICAL medicine, *COMPUTER software, *COST control, *DATABASE searching, *INFORMATION retrieval, *INFORMATION storage & retrieval systems, *MEDICAL databases, *MEDLINE, *ONLINE information services, *RESEARCH funding, *TIME, *USER interfaces, *ELECTRONIC publications, *DATA mining, *SPECIALTY hospitals, *PERIODICAL articles, *IMPACT factor (Citation analysis), *DESCRIPTIVE statistics
- Abstract
One measure of research productivity within the University of Kansas Cancer Center (KU Cancer Center) is peer-reviewed publications. Considerable effort goes into searching, capturing, reviewing, storing, and reporting cancer-relevant publications. Traditionally, gathering the information relevant to these publications has been done manually. This manuscript describes the effort to transition KU Cancer Center's publication-gathering process from a heavily manual to a more automated and efficient one. To achieve this transition in the most customized and cost-effective manner, a homegrown, automated system was developed using an open-source API, among other software. When comparing the automated and manual processes over several years of data, publication search and retrieval time dropped from an average of 59 hours to 35 minutes, which would amount to a cost savings of several thousand dollars per year. The development and adoption of an automated publication search process can offer research centers great potential for less error-prone results with savings in time and cost. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
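The abstract above does not specify which service the homegrown system queried. As one illustration of the general approach, a PubMed E-utilities search URL for an affiliation-based query can be assembled like this; the affiliation string and date range are hypothetical, not the KU Cancer Center's actual query:

```python
from urllib.parse import urlencode

# NCBI Entrez E-utilities ESearch endpoint.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=200):
    """Assemble a PubMed esearch URL; fetching it (not done here)
    returns the matching PMIDs as JSON."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS}?{urlencode(params)}"

# Hypothetical affiliation query restricted to one publication year.
url = build_esearch_url(
    '"university of kansas cancer center"[Affiliation] AND 2019[pdat]'
)
```

A periodic job fetching such URLs and diffing the returned PMIDs against a local store is one plausible shape for the automation the authors describe.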
15. Which are the most sensitive search filters to identify randomized controlled trials in MEDLINE?
- Author
- Glanville, Julie, Kotas, Eleanor, Featherstone, Robin, and Dooley, Gordon
- Subjects
- *DATABASE searching, *INFORMATION retrieval, *INFORMATION storage & retrieval systems, *MEDICAL databases, *MEDLINE, *ONLINE information services, *SUBJECT headings, *SEARCH engines, *ACCESS to information, *RANDOMIZED controlled trials, *DESCRIPTIVE statistics
- Abstract
Objective: The Cochrane Handbook of Systematic Reviews contains search filters to find randomized controlled trials (RCTs) in Ovid MEDLINE: one maximizing sensitivity and another balancing sensitivity and precision. These filters were originally published in 1994 and were adapted and updated in 2008. To determine the performance of these filters, the authors tested them and thirty-six other MEDLINE filters against a large new gold standard set of relevant records. Methods: We identified a gold standard set of RCT reports published in 2016 from the Cochrane CENTRAL database of controlled clinical trials. We retrieved the records in Ovid MEDLINE and combined these with each RCT filter. We calculated their sensitivity, relative precision, and f-scores. Results: The gold standard comprised 27,617 records. MEDLINE searches were run on July 16, 2019. The most sensitive RCT filter was Duggan et al. (sensitivity=0.99). The Cochrane sensitivity-maximizing RCT filter had a sensitivity of 0.96 but was more precise than Duggan et al. (0.14 compared to 0.04 for Duggan). The most precise RCT filters had 0.97 relative precision and 0.83 sensitivity. Conclusions: The Cochrane Ovid MEDLINE sensitivity-maximizing RCT filter can continue to be used by Cochrane reviewers and to populate CENTRAL, as it has very high sensitivity and slightly better precision relative to more sensitive filters. The results of this study, which used a very large gold standard to compare the performance of all known RCT filters, allow searchers to make better-informed decisions about which filters to use for their work. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
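The f-scores mentioned in the study above combine sensitivity (recall) and precision into a single harmonic mean, which is what lets a slightly less sensitive but far more precise filter come out ahead. A quick sketch applying the standard F1 formula to the values reported in the abstract:

```python
def f1_score(sensitivity, precision):
    """Harmonic mean of sensitivity (recall) and precision."""
    if sensitivity + precision == 0:
        return 0.0
    return 2 * sensitivity * precision / (sensitivity + precision)

# Values reported in the abstract: despite its higher sensitivity,
# Duggan et al.'s filter scores lower on F1 than the Cochrane
# sensitivity-maximizing filter because of its much lower precision.
f1_cochrane = f1_score(0.96, 0.14)
f1_duggan = f1_score(0.99, 0.04)
```

This is the generic F1; whether the authors weighted sensitivity and precision equally or used a weighted F-beta variant is not stated in the abstract.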
18. Efficient identification of patients eligible for clinical studies using case-based reasoning on Scottish Health Research register (SHARE).
- Author
- Shi, Wen, Kelsey, Tom, and Sullivan, Frank
- Subjects
- *CASE-based reasoning, *DATABASE searching, *PUBLIC health research, *ELECTRONIC health records, *INFORMATION retrieval
- Abstract
Background: Trials often struggle to achieve their target sample size with only half doing so. Some researchers have turned to Electronic Health Records (EHRs), seeking a more efficient way of recruitment. The Scottish Health Research Register (SHARE) obtained patients' consent for their EHRs to be used as a searching base from which researchers can find potential participants. However, due to the fact that EHR data is not complete, sufficient or accurate, a database search strategy may not generate the best case-finding result. The current study aims to evaluate the performance of a case-based reasoning method in identifying participants for population-based clinical studies recruiting through SHARE, and assess the difference between its resultant cohort and the original one deriving from searching EHRs.Methods: A case-based reasoning framework was applied to 119 participants in nine projects using two-fold cross-validation, with records from a further 86,292 individuals used for testing. A prediction score for study participation was derived from the diagnosis, procedure, pharmaceutical prescription, and laboratory test results attributes of each participant. Evaluation was conducted by calculating Area Under the ROC Curve and information retrieval metrics for the ranking list of the test set by prediction score. We compared the most likely participants as identified by searching a database to those ranked highest by our model.Results: The average ROCAUC for nine projects was 81% indicating strong predictive ability for these data. However, the derived ranking lists showed lower predictive performance, with only 21% of the persons ranked within top 50 positions being the same as identified by searching databases.Conclusions: Case-based reasoning is may be more effective than a database search strategy for participant identification for clinical studies using population EHRs. 
The lower performance of ranking lists derived from case-based reasoning means that patients identified as highly suitable for study participation may still not be recruited. This suggests that further study is needed into improvements in the collection and curation of population EHRs, such as the use of free-text data, to aid reliable identification of people more likely to be recruited to clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
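The abstract above evaluates its prediction scores with the Area Under the ROC Curve. As an illustration, AUC can be computed directly from a ranking via its Mann-Whitney interpretation; the scores and labels below are invented, not the study's data.

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (label 1) outranks a randomly chosen
    negative (label 0), counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative prediction scores: participants (1) vs. non-participants (0).
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0]
print(roc_auc(scores, labels))  # 1.0: every participant outranks every non-participant
```

An AUC of 0.81, as reported, means a randomly chosen participant receives a higher prediction score than a randomly chosen non-participant 81% of the time.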
19. Patterns for Searching Data on the Web Across Different Research Communities.
- Author
-
Borst, Timo and Limani, Fidan
- Subjects
- *
SCIENTIFIC community , *INTERNET searching , *DATABASE searching , *WEB-based user interfaces , *INFORMATION retrieval - Abstract
Data search in a web-based environment, a concept long familiar in the domain of information retrieval, has recently gained attention. With researchers and academic institutions increasingly publishing their data on the public web, traditional research workflows with respect to data search are subject to empirical analysis, user studies, re-engineering, and service development. We investigate these workflows in more detail and introduce three patterns of web-based data search intended to serve both as a general reference and as a starting point for discipline-specific adoption. We give some real-world examples in terms of existing web applications and GUI components, thereby suggesting a combination of generic and community-specific approaches towards solutions for data search. We further analyze these patterns by means of empirical evidence found in several research communities, before giving a summary and an outlook on future work. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
20. Usability Test Results for a Discovery Tool in an Academic Library.
- Author
-
Condit Fagan, Jody, Mandernach, Meris, Nelson, Carl S., Paulo, Jonathan R., and Saunders, Grover
- Subjects
- *
INFORMATION retrieval , *ACADEMIC libraries , *COMPUTER software , *CUSTOMER satisfaction , *DATABASE searching , *INFORMATION storage & retrieval systems , *LIBRARY orientation , *RESEARCH methodology , *METADATA , *SCIENTIFIC observation , *QUESTIONNAIRES , *SCALES (Weighing instruments) , *SERIAL publications , *SYSTEMS design , *USER interfaces , *KEYWORD searching , *BIBLIOGRAPHIC databases , *UNOBTRUSIVE measures , *EVALUATION research , *ONLINE library catalogs , *INFORMATION-seeking behavior , *CONTENT mining - Abstract
Discovery tools are emerging in libraries. These tools offer library patrons the ability to concurrently search the library catalog and journal articles. While vendors rush to provide feature-rich interfaces and access to as much content as possible, librarians wonder about the usefulness of these tools to library patrons. To learn about both the utility and usability of EBSCO Discovery Service, James Madison University (JMU) conducted a usability test with eight students and two faculty members. The test consisted of nine tasks focused on common patron requests or related to the utility of specific discovery tool features. Software recorded participants' actions and time on task, human observers judged the success of each task, and a post-survey questionnaire gathered qualitative feedback and comments from the participants. Participants were successful at most tasks, but specific usability problems suggested some interface changes for both EBSCO Discovery Service and JMU's customizations of the tool. The study also raised several questions for libraries above and beyond any specific discovery-tool interface, including the scope and purpose of a discovery tool versus other library systems, working with the large result sets made possible by discovery tools, and navigation between the tool and other library services and resources. This article will be of interest to those who are investigating discovery tools, selecting products, integrating discovery tools into a library web presence, or performing evaluations of similar systems. [ABSTRACT FROM AUTHOR]
- Published
- 2012
21. Usability Testing of a Large, Multidisciplinary Library Database: Basic Search and Visual Search.
- Author
-
Fagan, Jody Condit
- Subjects
- *
LIBRARIES , *COMPUTER interfaces , *INFORMATION services , *LIBRARY information networks , *LIBRARY users , *TESTING , *DATABASE searching , *ELECTRONIC information resource searching , *INFORMATION retrieval - Abstract
Visual search interfaces have been shown by researchers to assist users with information search and retrieval. Recently, several major library vendors have added visual search interfaces or functions to their products. For public service librarians, perhaps the most critical area of interest is the extent to which visual search interfaces and text-based search interfaces support research. This study presents the results of eight full-scale usability tests of both the EBSCOhost Basic Search and Visual Search in the context of a large liberal arts university. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
22. The New Version of MEDLINE: What Searchers Want.
- Author
-
Clancy, Stephen, Keiko Stark, Rachel, and Suk-Ling Murphy, Linda
- Subjects
- *
DATABASE searching , *INFORMATION retrieval , *INFORMATION services , *INTERNET , *MEDLINE , *ONLINE information services , *QUALITY assurance , *SURVEYS , *INFORMATION-seeking behavior - Abstract
The article focuses on the user-friendly features that were added to PubMed, including an improved interface that allows users to limit searches by age group, gender, human or animal studies, language, publication type, date, and so on; it also mentions that many frequent users of PubMed noticed the removal of other features that had been found useful in the past, such as the XML export option.
- Published
- 2020
23. The Use of Filters in Library Discovery.
- Author
-
Resau, Kaci
- Subjects
- *
ACADEMIC libraries , *DATABASE searching , *INFORMATION retrieval , *INTERVIEWING , *SURVEYS , *ACCESS to information , *USER-centered system design , *PRE-tests & post-tests - Abstract
The article focuses on the limited use of filters in library discovery. An examination of how librarians were using the Primo facets, and the general lack of facet use, led the author to a deep dive into the analytics. It discusses user behavior for faceted navigation, a usability test designed to be completed within 10 minutes of a user's time, and the usability testing software that was considered. It also notes that digital collection results were not directly clickable.
- Published
- 2019
24. Search results outliers among MEDLINE platforms.
- Author
-
Sean Burns, Christopher, Shapiro II, Robert M., Nix, Tyler, and Huber, Jeffrey T.
- Subjects
- *
DATABASE searching , *HEALTH , *INFORMATION retrieval , *MEDLINE , *METADATA , *ONLINE information services , *PROGRAMMING languages , *SEARCH engines , *BIBLIOGRAPHIC databases - Abstract
Objective: Hypothetically, content in MEDLINE records is consistent across multiple platforms. Though platforms have different interfaces and requirements for query syntax, results should be similar when the syntax is controlled for across the platforms. The authors investigated how search result counts varied when searching records among five MEDLINE platforms. Methods: We created 29 sets of search queries targeting various metadata fields and operators. Within search sets, we adapted 5 distinct, compatible queries to search 5 MEDLINE platforms (PubMed, ProQuest, EBSCOhost, Web of Science, and Ovid), totaling 145 final queries. The 5 queries were designed to be logically and semantically equivalent and were modified only to match platform syntax requirements. We analyzed the result counts and compared PubMed's MEDLINE result counts to result counts from the other platforms. We identified outliers by measuring the result count deviations using modified z-scores centered around PubMed's MEDLINE results. Results: Web of Science and ProQuest searches were the most likely to deviate from the equivalent PubMed searches. EBSCOhost and Ovid were less likely to deviate from PubMed searches. Ovid's results were the most consistent with PubMed's but appeared to apply an indexing algorithm that resulted in lower retrieval sets among equivalent searches in PubMed. Web of Science exhibited problems with exploding or not exploding Medical Subject Headings (MeSH) terms. Conclusion: Platform enhancements among interfaces affect record retrieval and challenge the expectation that MEDLINE platforms should, by default, be treated as MEDLINE. Substantial inconsistencies in search result counts, as demonstrated here, should raise concerns about the impact of platform-specific influences on search results. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
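The entry above identifies outlier platforms by measuring result-count deviations with modified z-scores. The paper centers its scores on PubMed's counts; the sketch below uses the standard median/MAD form of the modified z-score (Iglewicz and Hoaglin), with hypothetical result counts, so the exact numbers are illustrative only.

```python
def modified_z_scores(counts):
    """Modified z-scores: 0.6745 * (x - median) / MAD. More robust than
    standard z-scores for small samples such as five platform result
    counts per query."""
    xs = sorted(counts)
    n = len(xs)
    median = xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2
    devs = sorted(abs(x - median) for x in counts)
    mad = devs[n // 2] if n % 2 else (devs[n // 2 - 1] + devs[n // 2]) / 2
    if mad == 0:
        return [0.0 for _ in counts]
    return [0.6745 * (x - median) / mad for x in counts]

# Hypothetical result counts for one query across five MEDLINE platforms.
counts = {"PubMed": 1480, "Ovid": 1470, "EBSCOhost": 1490,
          "WoS": 2600, "ProQuest": 1500}
z = dict(zip(counts, modified_z_scores(list(counts.values()))))
outliers = [p for p, s in z.items() if abs(s) > 3.5]  # 3.5 is a common cutoff
```

With these made-up counts, only the Web of Science figure is flagged, matching the kind of deviation pattern the study reports.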
25. DART-ID increases single-cell proteome coverage.
- Author
-
Chen, Albert Tian, Franks, Alexander, and Slavov, Nikolai
- Subjects
- *
TANDEM mass spectrometry , *MONOCYTES , *RF values (Chromatography) , *LEUCOCYTES , *STATISTICAL power analysis , *LIQUID chromatography - Abstract
Analysis by liquid chromatography and tandem mass spectrometry (LC-MS/MS) can identify and quantify thousands of proteins in microgram-level samples, such as those composed of thousands of cells. This process, however, remains challenging for smaller samples, such as the proteomes of single mammalian cells, because reduced protein levels reduce the number of confidently sequenced peptides. To alleviate this reduction, we developed Data-driven Alignment of Retention Times for IDentification (DART-ID). DART-ID implements principled Bayesian frameworks for global retention time (RT) alignment and for incorporating RT estimates towards improved confidence estimates of peptide-spectrum-matches. When applied to bulk or to single-cell samples, DART-ID increased the number of data points by 30–50% at 1% FDR, and thus decreased missing data. Benchmarks indicate excellent quantification of peptides upgraded by DART-ID and support their utility for quantitative analysis, such as identifying cell types and cell-type specific proteins. The additional data points provided by DART-ID boost the statistical power and double the number of proteins identified as differentially abundant in monocytes and T-cells. DART-ID can be applied to diverse experimental designs and is freely available at . [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. A comprehensive scoping review to identify standards for the development of health information resources on the internet.
- Author
-
Abdel-Wahab, Noha, Rai, Devesh, Siddhanamatha, Harish, Dodeja, Abhinav, Suarez-Almazor, Maria E., and Lopez-Olivo, Maria A.
- Subjects
- *
INFORMATION resources , *COMPUTER network resources , *INTERNET access , *MEDLINE , *SEARCH engines - Abstract
Background: Online health information, if evidence-based and unbiased, can improve patients’ and caregivers’ health knowledge and assist them in disease management and health care decision-making. Objective: To identify standards for the development of health information resources on the internet for patients. Methods: We searched in MEDLINE, CINAHL, Scopus, Web of Science, and Google Scholar for publications describing evaluation instruments for websites providing health information. Eligible instruments were identified by three independent reviewers and disagreements resolved by consensus. Items reported were extracted and categorized into seven domains (accuracy, completeness and comprehensiveness, technical elements, design and aesthetics, usability, accessibility, and readability) that were previously thought to be a minimum requirement for websites. Results: One hundred eleven articles met inclusion criteria, reporting 92 evaluation instruments (1609 items). We found 74 unique items that we grouped into the seven domains. For the accuracy domain, one item evaluated information provided in concordance with current guidelines. For completeness and comprehensiveness, 18 items described the disease with respect to various topics such as etiology or therapy, among others. For technical elements, 27 items evaluated disclosure of authorship, sponsorship, affiliation, editorial process, feedback process, privacy, and data protection. For design and aesthetics, 10 items evaluated consistent layout and relevant graphics and images. For usability, 10 items evaluated ease of navigation and functionality of internal search engines. For accessibility, five items evaluated the availability of websites to people with audiovisual disabilities. For readability, three items evaluated conversational writing style and use of a readability tool to determine the reading level of the text. Conclusion: We identified standards for the development of online patient health information. 
This proposed instrument can serve as a guideline to develop and improve how health information is presented on the internet. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. Context-aware grading of quality evidences for evidence-based decision-making.
- Author
-
Afzal, Muhammad, Hussain, Maqbool, Haynes, Robert Brian, and Sungyoung Lee
- Subjects
- *
COMPUTER simulation , *COMPUTER software , *CONCEPTUAL structures , *DATABASE searching , *DECISION support systems , *INFORMATION retrieval , *INFORMATION storage & retrieval systems , *MEDICAL databases , *RESEARCH methodology , *MEDICAL literature , *META-analysis , *METADATA , *QUALITY assurance , *RESEARCH funding , *USER interfaces , *EVIDENCE-based medicine , *DECISION making in clinical medicine , *DESCRIPTIVE statistics , *SOFTWARE analytics - Abstract
Processing a huge repository of medical literature to extract relevant, high-quality evidences demands efficient evidence support methods. We aim to automate the process of finding quality evidences in a plethora of literature documents and to grade them according to the context (local condition). We propose a two-level methodology for quality recognition and grading of evidences. First, quality is recognized using a quality recognition model; second, context-aware grading of evidences is performed. Using 10-fold cross-validation, the proposed quality recognition model achieved an accuracy of 92.14 percent and improved the baseline system's accuracy by about 24 percent. The proposed context-aware grading method graded 808 out of 1354 test evidences as highly beneficial for treatment purposes. This implies that around 60 percent of evidences should be given more importance than the other 40 percent. The inclusion of context in the recommendation of evidence makes the process of evidence-based decision-making "situation-aware." [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Search Effectiveness in Nonredundant Sequence Databases: Assessments and Solutions.
- Author
-
Chen, Qingyu, Zhang, Xiuzhen, Wan, Yu, Zobel, Justin, and Verspoor, Karin
- Subjects
- *
BIOLOGICAL databases , *DATABASES , *DATABASE searching , *LEAD time (Supply chain management) , *FOOD recall , *INFORMATION retrieval - Abstract
Duplicate sequence records—that is, records having similar or identical sequences—are a challenge when searching biological sequence databases. They significantly increase database search time and can lead to uninformative search results containing similar sequences. Sequence clustering methods have been used to address this issue by grouping similar sequences into clusters. These clusters form a nonredundant database consisting of representatives (one record per cluster) and members (the remaining records in a cluster). In this approach, for nonredundant database search, users search against representatives first and optionally expand search results by exploring member records from matching clusters. Existing studies used Precision and Recall to assess the search effectiveness of nonredundant databases. However, the use of Precision and Recall does not model user behavior in practice and thus may not reflect practical search effectiveness. In this study, we first propose innovative evaluation metrics to measure search effectiveness. The findings are that (1) the Precision of expanded sets is consistently lower than that of representatives, with a decrease of up to 7% at top ranks; and (2) Recall is uninformative because, for most queries, expanded sets return more records than does search of the original unclustered databases. Motivated by these findings, we propose a solution that returns a user-specified proportion of top similar records, modeled by a ranking function that aggregates sequence and annotation similarities. In experiments undertaken on UniProtKB/Swiss-Prot, the largest expert-curated protein database, we show that our method dramatically reduces the number of returned sequences, increases Precision by 3%, and does not impact effective search time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
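The Precision comparisons in the abstract above (representatives versus expanded sets at top ranks) rest on precision-at-k. A minimal sketch with toy record IDs, not the study's data:

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k ranked records that are relevant."""
    top = ranked_ids[:k]
    return sum(1 for r in top if r in relevant_ids) / k

# Toy ranking for one query: searching representatives only vs. the
# expanded set that pulls in cluster members.
relevant = {"P1", "P2", "P3"}
representatives = ["P1", "P2", "X1", "P3", "X2"]
expanded = ["P1", "X1", "X2", "P2", "X3", "P3"]
print(precision_at_k(representatives, relevant, 5))  # 0.6
print(precision_at_k(expanded, relevant, 5))         # 0.4
```

Here the expanded set dilutes the top ranks with near-duplicate member records, illustrating the Precision drop the study measures at top ranks.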
29. DigChem: Identification of disease-gene-chemical relationships from Medline abstracts.
- Author
-
Kim, Jeongkyun, Kim, Jung-jae, and Lee, Hyunju
- Subjects
- *
BIOCHEMICAL genetics , *MEDICAL research , *GENES , *SHORT-term memory , *DEEP learning - Abstract
Chemicals interact with genes in the process of disease development and treatment. Although much biomedical research has been performed to understand relationships among genes, chemicals, and diseases, which have been reported in biomedical articles in Medline, there are few studies that extract disease–gene–chemical relationships from biomedical literature at a PubMed scale. In this study, we propose a deep learning model based on bidirectional long short-term memory to identify the evidence sentences of relationships among genes, chemicals, and diseases from Medline abstracts. Then, we develop the search engine DigChem to enable disease–gene–chemical relationship searches for 35,124 genes, 56,382 chemicals, and 5,675 diseases. We show that the identified relationships are reliable by comparing them with manual curation and existing databases. DigChem is available at . [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Errors in search strategies used in systematic reviews and their effects on information retrieval.
- Author
-
Salvador-Oliván, José Antonio, Marco-Cuenca, Gonzalo, and Arquero-Avilés, Rosario
- Subjects
- *
DATABASE searching , *INFORMATION retrieval , *MEDLINE , *ONLINE information services , *QUALITY assurance , *STRATEGIC planning , *SYSTEMATIC reviews , *HUMAN error , *DATA quality , *DATA analysis software - Abstract
Objectives: Errors in search strategies negatively affect the quality and validity of systematic reviews. The primary objective of this study was to evaluate searches performed in MEDLINE/PubMed to identify errors and determine their effects on information retrieval. Methods: A PubMed search was conducted using the systematic review filter to identify articles that were published in January of 2018. Systematic reviews or meta-analyses were selected from a systematic search for literature containing reproducible and explicit search strategies in MEDLINE/PubMed. Data were extracted from these studies related to ten types of errors and to the terms and phrases search modes. Results: The study included 137 systematic reviews in which the number of search strategies containing some type of error was very high (92.7%). Errors that affected recall were the most frequent (78.1%), and the most common search errors involved missing terms in both natural language and controlled language and those related to Medical Subject Headings (MeSH) search terms and the non-retrieval of their more specific terms. Conclusions: To improve the quality of searches and avoid errors, it is essential to plan the search strategy carefully, which includes consulting the MeSH database to identify the concepts and choose all appropriate terms, both descriptors and synonyms, and combining search techniques in the free-text and controlled-language fields, truncating the terms appropriately to retrieve all their variants. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. A failed attempt at developing a search filter for systematic review methodology articles in Ovid Embase.
- Author
-
Neilson, Christine and Mê-Linh Lê
- Subjects
- *
ABSTRACTING & indexing services , *BIBLIOGRAPHICAL citations , *BIBLIOGRAPHY , *DATABASE searching , *DATABASES , *INFORMATION retrieval , *MEDICAL information storage & retrieval systems , *RESEARCH methodology , *MEDICAL protocols , *SYSTEMATIC reviews , *DATA analysis , *LITERATURE reviews , *SOFTWARE architecture , *INFORMATION-seeking behavior , *CONTENT mining , *ACCURACY , *SOFTWARE analytics - Abstract
Objectives: This paper describes the development, execution, and subsequent failure of an attempt to create an Ovid Embase search filter for locating systematic review methodology articles. Methods: The authors devised a work plan, based on best practices, for search filter development that has been outlined in the literature. Three reference samples were gathered by identifying the OVID Embase records for specific articles that were included in the PubMed Systematic Review Methods subset. The first sample was analyzed to develop a set of keywords and subject headings to include in the search filter. The second and third samples would have been used to calibrate the search filter and to calculate filter sensitivity and precision, respectively. Results: Technical shortcomings, database indexing practices, and the fuzzy nature of keyword terminology relevant to the topic prevented us from designing the search filter. Conclusion: Creating a search filter to identify systematic review methodology articles in Ovid Embase is not possible at this time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
32. Automating the search for a patent’s prior art with a full text similarity search.
- Author
-
Helmers, Lea, Horn, Franziska, Biegler, Franziska, Oppermann, Tim, and Müller, Klaus-Robert
- Subjects
- *
FULL-text databases , *PATENT infringement , *PATENT applications , *KEYWORD searching , *NATURAL language processing - Abstract
More than ever, technical inventions are the symbol of our society's advance. Patents guarantee their creators protection against infringement. For an invention to be patentable, its novelty and inventiveness have to be assessed. Therefore, a search for published work that describes similar inventions to a given patent application needs to be performed. Currently, this so-called search for prior art is executed with semi-automatically composed keyword queries, which is not only time consuming but also prone to errors. In particular, errors may arise systematically from the fact that different keywords for the same technical concepts may exist across disciplines. In this paper, a novel approach is proposed in which the full text of a given patent application is compared to existing patents using machine learning and natural language processing techniques to automatically detect inventions that are similar to the one described in the submitted document. Various state-of-the-art approaches for feature extraction and document comparison are evaluated. In addition, the quality of the current search process is assessed based on the ratings of a domain expert. The evaluation results show that our automated approach, besides accelerating the search process, also improves the quality of the search results for prior art. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
33. Does pre-activating domain knowledge foster elaborated online information search strategies? Comparisons between young and old web user adults.
- Author
-
Sanchiz, M., Amadieu, F., Fu, W.T., and Chevalier, A.
- Subjects
- *
ONLINE information services , *DATABASE searching , *INTERNET users , *INFORMATION theory , *MEANING (Psychology) , *AGE distribution , *COMPARATIVE studies , *INFORMATION retrieval , *INTELLECT , *INTERNET , *RESEARCH methodology , *MEDICAL cooperation , *RESEARCH , *SEMANTICS , *EVALUATION research , *INFORMATION-seeking behavior - Abstract
The present study aimed to investigate how pre-activating prior topic knowledge before browsing the web can support the information search performance and strategies of young and older users. The experiment focuses on analyzing to what extent prior knowledge pre-activation might mitigate older users' difficulties when interacting with a search engine. 26 older (age 60 to 77) and 22 young (age 18 to 32) adults performed 6 information search problems related to health and fantastic movies. Overall, results showed that pre-activating prior topic knowledge increased the time spent evaluating the search engine results pages, fostered deeper processing of the navigational paths elaborated (and thus reduced the exploration of different navigational paths), and improved the semantic specificity of queries. Pre-activating prior knowledge helped older adults produce semantically more specific queries when they had lower prior knowledge than young adults. Moderation analyses indicated that the pre-activation supported older adults' search performance on the condition that participants generated semantically relevant keywords during the pre-activation task. These results imply that prior topic knowledge pre-activation may be a promising way to leverage the beneficial role of prior knowledge in older users' search behavior and performance. Recommendations for designing pre-activation support tools are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. Information Search Strategies among LIS Professionals: A Case Study of Selected Institutions in India.
- Author
-
Thanuskodi, S.
- Subjects
- *
ACADEMIC libraries , *DATABASE searching , *INFORMATION science , *ACQUISITION of data , *INFORMATION retrieval - Abstract
This study examined the information search strategies employed for research by Library and Information Science (LIS) professionals at selected institutions in India. A questionnaire was used as the main instrument for gathering data. Data collected were analyzed using simple frequency tables and means. Search specialists can be found in libraries of all kinds but are located especially in college and university libraries and in the information centres and other special libraries associated with business and industrial organizations, law firms, and medical establishments. Some search specialists are freelance entrepreneurs, in business for themselves and actively marketing their services to special user populations. Clients of online information retrieval search specialists include undergraduate and graduate students and faculty in academic libraries, as well as scientists, engineers, businessmen, doctors, lawyers, and many others using special libraries and information centres to help satisfy their information needs. The study revealed that most of the respondents across educational qualifications prefer 'their library catalogue', except the respondents with a 'UG in LIS' qualification. Most of the respondents (44.4%) with a 'UG in LIS' qualification prefer 'open access databases' to seek needed information, followed by 'their library catalogue' (22.2%). The findings of this study shed light on important data and provide insight into the current practices of LIS professionals and their understanding of the information searching process on the internet. The outcomes and suggestions of the study would help them take appropriate measures to improve their information search strategy skills. [ABSTRACT FROM AUTHOR]
- Published
- 2019
35. Search Strategies and the Relevance of Retrieved Information in Persian Articles Database: Survey of M.A Students of Shiraz University.
- Author
-
Moghaddaszadeh, Hassan
- Subjects
- *
DATABASE searching , *ANALYSIS of variance , *INFORMATION retrieval , *RESEARCH - Abstract
Retrieving relevant information on the Internet and identifying the information related to their real needs are not easy tasks for many users. The main objective of this study was therefore to evaluate the effect of search strategies on the relevance of information retrieved from domestic article databases. Considering the nature of the subject, this was applied descriptive-survey research. The statistical population consists of all domestic article databases, from which MAGIRAN, IRANDOC, NOORMAGZ, and the Regional Information Center for Science and Technology (RICeST) were selected as samples. To test the hypotheses, one-way analysis of variance (ANOVA) and Tukey's post-hoc test were computed using SPSS statistical software version 22. The study's findings showed significant differences between the relevance of the information retrieved from different databases based on different search strategies. It was found that using simple search had the highest relevance, while using the AND, NOT, and OR operators took the lower ranks, respectively. Using the time limiter had the lowest relevance in information retrieval. There were also significant differences between the relevance of information retrieved from the different databases, with NOORMAGZ, RICeST, MAGIRAN, and IRANDOC, respectively, having the most relevant retrievals. Using different search strategies can affect the relevance of the information retrieved from an article database. Therefore, mastering these strategies and using each one in the right situation can improve the relevance of the retrieved information. [ABSTRACT FROM AUTHOR]
- Published
- 2019
36. Neo4j graph database realizes efficient storage performance of oilfield ontology.
- Author
-
Gong, Faming, Ma, Yuhui, Gong, Wenjuan, Li, Xiaoran, Li, Chantao, and Yuan, Xiangbing
- Subjects
- *
OIL fields , *INFORMATION processing , *INFORMATION retrieval , *BACK up systems , *GRAPH theory - Abstract
The integration of oilfield multidisciplinary ontology is increasingly important for the growth of the Semantic Web. However, current methods encounter performance bottlenecks in both data storage and information search when processing large amounts of data. To overcome these challenges, we propose a domain-ontology process based on the Neo4j graph database. In this paper, we focus on the data storage and information retrieval of oilfield ontology. We have designed mapping rules from ontology files to the Neo4j database, which can greatly reduce the required storage space. A two-tier index architecture, including object and triad indexing, is used to keep loading times low and to match different patterns for accurate retrieval. We propose a retrieval method based on this architecture. Based on our evaluation, the method can save 13.04% of the storage space and improve retrieval efficiency by more than 30 times compared with relational database methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
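The ontology-to-graph mapping described above can be sketched as follows. This is a hypothetical simplification, not the paper's actual rules: the single `Entity` label, the deduplication rule, and the sample oilfield triples are all assumptions for illustration.

```python
def map_triples(triples):
    """Hypothetical mapping rule: each ontology triple (subject, predicate,
    object) becomes nodes plus one typed relationship, with nodes
    deduplicated so repeated entities are stored only once. Node and
    relationship dicts mirror what would be loaded into a graph database."""
    nodes, rels = {}, []
    for subj, pred, obj in triples:
        for name in (subj, obj):
            nodes.setdefault(name, {"label": "Entity", "name": name})
        rels.append({"start": subj, "type": pred.upper(), "end": obj})
    return list(nodes.values()), rels

# Invented oilfield triples: Well-07 appears in two triples but yields one node.
triples = [("Well-07", "locatedIn", "Block-A"),
           ("Well-07", "produces", "CrudeOil"),
           ("Block-A", "partOf", "Oilfield-North")]
nodes, rels = map_triples(triples)
# 4 unique nodes, 3 relationships
```

Storing each entity once and reusing it across relationships is the kind of reuse that yields the storage savings graph databases offer over row-per-triple relational layouts.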
37. What have we learned from the time trend of mass shootings in the U.S.?
- Author
-
Lin, Ping-I, Fei, Lin, Barzman, Drew, and Hossain, M.
- Subjects
- *
MASS shootings , *MENTAL illness statistics , *REGRESSION analysis , *STATISTICAL correlation - Abstract
Little is known regarding the time trend of mass shootings and associated risk factors. In the current study, we explored the time trend and relevant risk factors for mass shootings in the U.S., attempting to identify factors associated with incidence rates of mass shootings at the population level. We evaluated whether state-level gun ownership rate, serious mental illness rate, poverty percentage, and gun law permissiveness could predict the state-level mass shooting rate, using a Bayesian zero-inflated Poisson regression model. We also tested whether the nationwide incidence rate of mass shootings increased over the past three decades using a non-homogeneous Poisson regression model. We further examined whether the frequency of online media coverage and online search interest levels correlated with the interval between two consecutive incidents. The results suggest an increasing trend in mass shooting incidents over time (p < 0.001). However, none of the state-level variables could predict the mass shooting rate. Interestingly, we found inverse correlations between the interval between consecutive shootings and the frequency of related online reports as well as online search interest (p < 0.001). Our findings therefore suggest that online media may correlate with the increasing incidence rate of mass shootings. Future research is warranted to continue monitoring whether the incidence rates of mass shootings change with any population-level factors in order to inform possible prevention strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Reference checking for systematic reviews using Endnote.
- Author
-
Bramer, Wichor M.
- Subjects
- *
BIBLIOGRAPHICAL citations , *BIBLIOGRAPHY , *DATABASE searching , *DATABASES , *INFORMATION retrieval , *LIBRARIES , *SYSTEMATIC reviews , *BIBLIOGRAPHIC databases , *TEACHING methods , *CITATION analysis - Abstract
In searches for systematic reviews, it is recommended that authors review the reference lists of retrieved relevant reviews for possible additional relevant references. This process can be time-consuming, since there is often overlap between the reference lists, and the lists contain references that were already retrieved in the initial searches. The author proposes a method in which EndNote is used in combination with the Scopus or Web of Science databases to semi-automatically download these references into an existing EndNote library. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
39. A systematic approach to searching: an efficient and complete method to develop literature searches.
- Author
-
Bramer, Wichor M., de Jonge, Gerdien B., Rethlefsen, Melissa L., Mast, Frans, and Kleijnen, Jos
- Subjects
- *
DATABASE searching , *INFORMATION retrieval , *INFORMATION storage & retrieval systems , *SUBJECT headings , *SYSTEMATIC reviews , *BIBLIOGRAPHIC databases - Abstract
Creating search strategies for systematic reviews, finding the best balance between sensitivity and specificity, and translating search strategies between databases are challenging tasks. Several methods describe standards for systematic search strategies, but a consistent approach to creating an exhaustive search strategy has not yet been described in enough detail to be fully replicable. The authors have established a method that describes, step by step, the process of developing a systematic search strategy as needed for a systematic review. In this method, single-line search strategies are prepared in a text document by typing the search syntax (such as field codes, parentheses, and Boolean operators) before copying and pasting in the search terms (keywords and free-text synonyms) found in the thesaurus. To help ensure term completeness, the authors developed a novel optimization technique based mainly on comparing the results retrieved by thesaurus terms with those retrieved by the free-text search words, in order to identify potentially relevant candidate search terms. Macros in Microsoft Word were developed to convert syntaxes between databases and interfaces almost automatically. The method can be used to create the complex and comprehensive search strategies needed when searching for relevant references for systematic reviews, and will assist both information specialists developing librarian-mediated searches and medical and health care practitioners searching the biomedical literature for evidence to answer clinical questions. [ABSTRACT FROM AUTHOR]
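The syntax translation the authors automate with Word macros amounts to rewriting field codes between database interfaces. A minimal sketch of the idea, assuming a toy mapping from PubMed-style tags to Embase-style suffixes (the table below is illustrative, not the authors' macro logic, and a real translator must also handle proximity operators and thesaurus term differences):

```python
# Toy field-code mapping: PubMed [tiab] (title/abstract) and [mh] (MeSH term)
# rewritten into Embase-style :ti,ab and /exp suffixes.
FIELD_MAP = {
    "[tiab]": ":ti,ab",
    "[mh]": "/exp",
}

def translate(query: str) -> str:
    """Rewrite every known field code; unknown syntax passes through unchanged."""
    for src, dst in FIELD_MAP.items():
        query = query.replace(src, dst)
    return query
```

Because the single-line strategy keeps all syntax on one line, a substitution pass like this converts the whole strategy in one step, which is exactly what makes the macro approach practical.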
- Published
- 2018
- Full Text
- View/download PDF
40. Systematic review of the effects of agricultural interventions on food security in northern Ghana.
- Author
-
Adu, Michael Osei, Yawson, David Oscar, Armah, Frederick Ato, Abano, Ernest Ekow, and Quansah, Reginald
- Subjects
- *
FOOD security , *POVERTY , *SYSTEMATIC reviews , *AGRICULTURAL development , *CAPACITY building - Abstract
Background: Food insecurity and poverty rates in Ghana are highest in the districts from latitude 8° N upwards. These have motivated several interventions aimed at addressing food insecurity by promoting agricultural growth. An assessment of the overall impact of these interventions on food security is necessary to guide policy design and future interventions. Methods and findings: A systematic review was used to assess the cumulative evidence of the effect of development interventions implemented from 2006 to 2016 on food security, especially in Northern Ghana. Information was retrieved from over 20 governmental and non-governmental organisations through online searches and actual visits. Twenty-two studies were included in the systematic review. The review showed that a large number of interventions were implemented in Northern Ghana over the study period. Access to quality extension services, training, and capacity building was a major intervention strategy. About 82% of the studies aimed at increasing production, but only 14% reported on changes in yield. About 42% of the included studies used market access as a strategy, and about 44% reported increases in beneficiaries' incomes (with only seven studies providing numerical evidence for this claim). Ranked by frequency, the intervention strategies were: extension and capacity building > production > postharvest value addition > water and irrigation facilities > storage facilities > input supply. A substantial number of the studies had no counterfactuals, weakening confidence in attributing impacts on food security even for the beneficiaries. Conclusions: Evidence for the impacts of the interventions on food security was weak, or largely assumed. A logical recommendation is the need for development partners to synchronise their measurement and indicators of food security outcomes. It is also recommended that food security indicators be explicitly incorporated into intervention design, bearing in mind the potential need for counterfactuals. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. Private Information Retrieval.
- Author
-
YEKHANIN, SERGEY
- Subjects
- *
INFORMATION storage & retrieval systems , *QUERY languages (Computer science) , *INFORMATION retrieval , *DATABASE searching , *COMPUTER users , *COMPUTER security software - Abstract
The article discusses several Private Information Retrieval (PIR) schemes: cryptographic protocols designed to protect the privacy of users' queries to public databases. According to the article, PIR allows clients to retrieve records from public databases while hiding the records' identity from the database owners. Information-theoretic PIR schemes offer the guarantee that no individual server participating in protocol execution receives any information about which record the user is after. Locally decodable codes and Hadamard codes are discussed.
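The flavor of two-server information-theoretic PIR can be shown with the classic XOR scheme over a database of bits: the client sends a random subset of indices to one server and the same subset with the target index flipped to the other; each server returns the XOR of its selected records, and neither server alone learns anything about the target. This is a textbook illustration of the setting the article surveys, not one of its specific constructions:

```python
import secrets

def pir_query(n: int, i: int):
    """Client side: build the two query vectors. sel2 differs from sel1 only
    at position i, so each vector alone is uniformly random."""
    sel1 = [secrets.randbelow(2) for _ in range(n)]
    sel2 = list(sel1)
    sel2[i] ^= 1
    return sel1, sel2

def pir_answer(db, sel):
    """Server side: XOR together the records the query vector selects."""
    acc = 0
    for bit, pick in zip(db, sel):
        if pick:
            acc ^= bit
    return acc

def pir_reconstruct(a1: int, a2: int) -> int:
    # The two subsets differ only at index i, so everything else cancels.
    return a1 ^ a2
```

The total communication here is linear in the database size; the schemes the article describes use clever encodings (e.g. based on locally decodable codes) precisely to beat this naive cost.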
- Published
- 2010
- Full Text
- View/download PDF
42. LOOKING UP DATA in P2P Systems.
- Author
-
Balakrishnan, Hari, Kaashoek, M. Frans, Karger, David, Morris, Robert, and Stoica, Ion
- Subjects
- *
PEER-to-peer architecture (Computer networks) , *COMPUTER networks , *INFORMATION retrieval , *DATABASE searching , *COMPUTER network architectures , *INFORMATION resources management - Abstract
In this article, the authors discuss their own and other algorithms for performing distributed lookup: finding the data associated with a key among a network of peers without a central authority. Because these algorithms have clear asymptotic performance bounds, designers can use them with assurance about the scalability of the lookup function, in contrast to the ad hoc, and sometimes failing or non-scaling, approaches in some grass-roots peer-to-peer (P2P) implementations. The task of using these algorithms is further eased by the authors' pointers to reference implementation toolkits. The research the authors describe provides a path for the development of new technologies with these desired attributes. The authors examine one of the problems raised by P2P computing, the lookup problem, which is simple to state: find a data item stored at some dynamic set of nodes in the system. They discuss both the structured lookup method and the symmetric lookup strategy for overcoming it.
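The structured lookup systems the authors survey (Chord is the best-known example) typically hash both keys and nodes onto an identifier ring and assign each key to its clockwise successor node. A minimal sketch of that assignment rule, with illustrative names and a naive linear successor scan rather than the O(log n) finger-table routing real systems use:

```python
import hashlib

def ring_hash(key: str, bits: int = 16) -> int:
    """Map a key (or node address) onto a 2**bits identifier ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << bits)

def successor(node_ids, key_id):
    """The node responsible for key_id is the first node at or after it,
    moving clockwise; wrap around if key_id exceeds every node id."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]   # wrapped past the top of the ring
```

The point of this structure is the scalability guarantee the article emphasizes: when a node joins or leaves, only the keys between it and its neighbors move, and lookups can be routed in a logarithmic number of hops.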
- Published
- 2003
- Full Text
- View/download PDF
43. BIAS ON THE WEB.
- Author
-
Mowshowitz, Abbe and Kawaguchi, Akira
- Subjects
- *
INTERNET searching , *PREJUDICES , *DATABASE searching , *INFORMATION retrieval , *WORLD Wide Web - Abstract
This article discusses the issue of search bias on the Internet. Biased search results on product information illustrate a general problem of considerable social importance. The statistical analyses described show that the bias measure discriminates between search engines, but that for most search engines bias depends neither on the subject domain searched nor on the search terms chosen to represent that domain. These results support the contention that the measure of bias discussed is a useful tool for assessing search engine performance, and similar results on a range of subject domains and search terms would justify using the measure to benchmark search engine performance. Regardless of the utility of this particular measure, it is clear that bias on the Web is a socially significant issue. The only realistic way to counter the ill effects of search engine bias on the ever-expanding Web is to make sure a number of alternative engines are available. Elimination of competition in the search engine business is just as problematic for a democratic society as consolidation in the new media. Both search engine companies and new media firms act as intermediaries between information sources and information seekers; too few intermediaries spell trouble.
- Published
- 2002
- Full Text
- View/download PDF
44. FINDING THE FLOW IN WEB SITE SEARCH.
- Author
-
Hearst, Marti, Elliott, Ame, English, Jennifer, Sinha, Rashmi, Swearingen, Kirsten, and Ka-Ping Yee
- Subjects
- *
INTERNET searching , *WEB search engines , *DATABASE searching , *INFORMATION retrieval , *COMPUTER interfaces - Abstract
This article argues that the design of a search system and interface is best served by scrutinizing usability studies. Unfortunately, most studies of search behavior are inconclusive about how to improve the system, but some consistencies do emerge about what works. This article summarizes which search features tend to work well, and which fail, in practice. Throughout, the assumption is that the user population consists of people who do not specialize in search and who have only basic knowledge of how to use computers. First and foremost, most users engaged in directed searches are not interested in search for its own sake; thus systems that make users focus on the operations for performing search are seldom successful. Features found to work well across studies are color highlighting of search terms in result listings; sorting of search results along criteria such as date and author; and grouping of search results according to well-organized category labels. Certain features are helpful in principle but only work in practice if the underlying algorithms are highly accurate and the interface is carefully designed; examples include spelling correction, automated term expansion, and simple relevance feedback, in which the user selects one item and the system shows items that are similar in scope along several dimensions.
- Published
- 2002
- Full Text
- View/download PDF
45. PERSONALIZED SEARCH.
- Author
-
Pitkow, James, Schütze, Hinrich, Cass, Todd, Cooley, Rob, Turnbull, Don, Edmonds, Andy, Adar, Eytan, and Breuel, Thomas
- Subjects
- *
INTERNET searching , *DATABASE searching , *INFORMATION retrieval , *SEARCH engines , *WEB search engines - Abstract
This article explores a contextual computing approach to improving the efficiency of personalized Internet search. Contextual computing refers to enhancing a user's interactions by understanding the user, the context, and the applications and information being used, typically across a wide set of user goals. The article reviews the evolution of the field of information retrieval (IR), setting the stage for examining how search can be personalized, with particular emphasis on the Web. The Outride system, designed as a generalized architecture for the personalization of search across a variety of information ecologies, is also described, and a set of experiments is reviewed. Outride, with eTesting Labs as an independent tester, designed a series of empirical tests to measure whether the Outride system makes searches faster and easier to complete. The article presents a new type of IR system that personalizes the search experience for each user across their interactions, along with initial evidence that a contextual computing approach to the personalization of search is the next frontier for significantly increasing search efficiency.
- Published
- 2002
- Full Text
- View/download PDF
46. EVOLVING DATA MINING INTO SOLUTIONS FOR INSIGHTS.
- Author
-
Fayyad, Usama and Uthurusamy, Ramasamy
- Subjects
- *
DATA mining , *INFORMATION retrieval , *DATABASE searching , *TECHNOLOGICAL innovations , *KNOWLEDGE management , *INFORMATION technology - Abstract
As of August 1, 2002, the capacity of digital data storage worldwide had doubled every nine months for at least a decade. This growth in storage capacity is one reason for the increasing importance and rapid growth of the field of data mining. The aggressive growth rate of disk storage, and the gap between the Moore's Law and Storage Law growth trends, represents a very interesting pattern in the evolution of technology: our ability to capture and store data has far outpaced our ability to process and utilize it. This growing challenge has produced a phenomenon the authors call data tombs, or data stores that are effectively write-only. Data mining is defined as the identification of interesting structure in data, where structure designates patterns, statistical or predictive models of the data, and relationships among parts of the data; each of these terms (patterns, models, and relationships) has a concrete definition in the context of data mining. Data mining is primarily concerned with making it easy, convenient, and practical to explore very large databases for organizations and users with lots of data but without years of training as data analysts.
- Published
- 2002
- Full Text
- View/download PDF
47. A new split based searching for exact pattern matching for natural texts.
- Author
-
Hakak, Saqib, Kamsin, Amirrudin, Shivakumara, Palaiahnakote, Idna Idris, Mohd Yamani, and Gilkar, Gulshan Amin
- Subjects
- *
GENETIC algorithms , *MOLECULAR biology , *TEXT processing (Computer science) , *IMAGE processing , *WEB search engines - Abstract
Exact pattern matching algorithms are popular and widely used in several applications, such as molecular biology, text processing, image processing, web search engines, network intrusion detection systems, and operating systems. These algorithms focus on achieving time efficiency for their target applications, but not on memory consumption. In this work, we propose a novel idea for achieving both time efficiency and low memory consumption by splitting the query string before searching the corpus. For a given text, the proposed algorithm splits the query pattern into two equal halves and uses the second (right) half as the query string for searching the corpus. Once a match for the second half is found, the algorithm applies a brute-force procedure to find the remaining match, by referring to the location of the right half. Experimental results on different S1 Dataset text collections, namely Arabic, English, Chinese, Italian, and French text databases, show that the proposed algorithm outperforms the existing S1 Algorithm in both time efficiency and memory consumption as the length of the query pattern increases. [ABSTRACT FROM AUTHOR]
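The split-and-verify idea in the abstract can be sketched in a few lines: scan only for the right half of the pattern, and on each hit brute-force check that the left half sits immediately before it. This is a simplified reading of the algorithm (using Python's built-in `find` as the underlying scanner), not the authors' implementation:

```python
def split_search(text: str, pattern: str):
    """Return the start offsets of every occurrence of pattern in text,
    searching only for the right half and verifying the left half."""
    mid = len(pattern) // 2
    left, right = pattern[:mid], pattern[mid:]
    hits, start = [], 0
    while True:
        pos = text.find(right, start)            # scan for the right half only
        if pos == -1:
            return hits
        if pos >= mid and text[pos - mid:pos] == left:
            hits.append(pos - mid)               # left half verified by brute force
        start = pos + 1
```

The memory argument follows from the split: any preprocessing table (shift table, automaton state, etc.) is built for a pattern half the original length, which shrinks as patterns grow.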
- Published
- 2018
- Full Text
- View/download PDF
48. A Reflection on the Applicability of Google Scholar as a Tool for Comprehensive Retrieval in Bibliometric Research and Systematic Reviews.
- Author
-
Houshyar, Mojgan and Sotudeh, Hajar
- Subjects
- *
OPEN data movement , *DATABASE searching , *ELECTRONIC information resource searching , *INTERNET searching , *INFORMATION retrieval - Abstract
Google Scholar has recently attracted great attention as an open access multidisciplinary citation database and as a tool for retrieving scientific works for scientometricians and researchers. The present research intended to highlight the limitations brought about by the efficiency policies of the search engine and their impact on the results available to users. To do so, it examined the accessibility of retrieval results by conducting 54 searches in the database. The results showed that the estimated result count at the top of the first page returned by Google Scholar did not match the number of accessible results; these statistics therefore cannot be relied on to determine precisely the number of documents on a topic. Moreover, although the subjects selected for the searches were very specific, the number of results for each search was very large and exceeded the upper limit of 1,000 records that Google Scholar will display. Limiting the searches to the title field dramatically reduced the number of results. Since the title is one of the most important representations of document content in scientific and technical fields, this strategy can increase the precision of the results and thus the effectiveness of the retrievals. The investigation of the accessibility of title-field search results also showed that some documents, though scarce in number, were still inaccessible despite being within the 1,000-record limit. In addition, in title-field searches, some rare cases of duplicate records and of incompatibilities between queries and documents were observed, regarding the language of the documents and exact phrase search. The lack of automatic truncation in field searches was one of the most important issues, necessitating the use of sophisticated search strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2018
49. Quality of flow diagram in systematic review and/or meta-analysis.
- Author
-
Vu-Ngoc, Hai, Elawady, Sameh Samir, Mehyar, Ghaleb Muhammad, Abdelhamid, Amr Hesham, Mattar, Omar Mohamed, Halhouli, Oday, Vuong, Nguyen Lam, Ali, Citra Dewi Mohd, Hassan, Ummu Helma, Kien, Nguyen Dang, Hirayama, Kenji, and Huy, Nguyen Tien
- Subjects
- *
FLOW charts , *META-analysis , *MEDICAL research , *STATISTICAL correlation , *REGRESSION analysis - Abstract
Systematic reviews and/or meta-analyses generally provide the best evidence for medical research. Authors are recommended to use flow diagrams to present the review process, allowing for better understanding among readers. However, no studies have yet assessed the quality of flow diagrams in systematic reviews/meta-analyses. Our study aims to evaluate the quality of systematic reviews/meta-analyses over a period of ten years by assessing the quality of their flow diagrams and the correlation with methodological quality. Two hundred “systematic review” and/or “meta-analysis” articles from January 2004 to August 2015 were randomly retrieved from PubMed and assessed for flow diagram and methodological quality. The flow diagrams were evaluated using a 16-point scale corresponding to the four stages of the PRISMA flow diagram: Identification, Screening, Eligibility, and Inclusion. Of the 200 articles screened, 154 were included and assessed with the AMSTAR checklist. Among them, 78 articles (50.6%) had a flow diagram. Over the ten years, the proportion of papers with a flow diagram increased significantly (regression coefficient beta = 5.649, p = 0.002), whereas the quality of the flow diagrams improved slightly but not significantly (regression coefficient beta = 0.177, p = 0.133). Our analysis showed high variation in the proportion of articles that reported each flow diagram component. The lowest proportions were 1% for reporting the method of duplicate removal in the screening phase, followed by 6% for manual searches in the identification phase, 22% for the number of studies in each specific/subgroup analysis, 27% for the number of articles retrieved from each database, and 31% for the number of studies included in the qualitative analysis. Flow diagram quality was correlated with methodological quality (Pearson's r = 0.32, p = 0.0039). This review therefore suggests that the reporting quality of flow diagrams is less than satisfactory, and that the potential benefit of flow diagrams is not being realized. A guideline with a standardized flow diagram is recommended to improve the quality of systematic reviews and to enable better reader comprehension of the review process. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
50. FusionHub: A unified web platform for annotation and visualization of gene fusion events in human cancer.
- Author
-
Panigrahi, Priyabrata, Jere, Abhay, and Anamika, Krishanpal
- Subjects
- *
GENE fusion , *SMALL interfering RNA , *SEARCH engines , *NON-coding RNA , *PUBLIC domain - Abstract
Gene fusion is a chromosomal rearrangement event that plays a significant role in cancer due to the oncogenic potential of the chimeric proteins generated through fusions. At present, many databases available in the public domain provide detailed information about known gene fusion events and their functional roles. Existing gene fusion detection tools, based on analysis of transcriptomics data, usually report a large number of fusion genes as potential candidates, which may be known, novel, or false positives; manual annotation of these putative genes is time-consuming. We have developed a web platform, FusionHub, which acts as an integrated search engine interfacing various fusion gene databases and simplifies large-scale annotation of fusion genes in a seamless way. In addition, FusionHub provides three ways of visualizing fusion events: circular view, domain architecture view, and network view. Design of potential siRNA molecules through an ensemble method is another utility integrated in FusionHub that could aid siRNA-based targeted therapy. FusionHub is freely available at . [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF