Language: R

/ GSscraper | R package for scraping search results (including DOIs) from Google Scholar

Google Scholar is one of the resources researchers use most often for everyday information retrieval. Although designed for ‘lookup’ searches (i.e. finding one or more specific records), it may also be a useful supplementary resource in evidence reviews (including systematic reviews) for both academic and grey literature.
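As an illustration of the underlying approach (this is not GSscraper's own interface; the query URL, the CSS selector and the DOI regular expression are assumptions, and Scholar actively limits automated requests), a results page could be scraped and DOI-like strings extracted as follows:

```r
# Minimal sketch of the underlying idea, not GSscraper's own API
library(rvest)

scholar_url <- "https://scholar.google.com/scholar?q=%22systematic+review%22+automation"
page <- read_html(scholar_url)

# Result titles (".gs_rt" is an assumption about Scholar's current markup,
# which can change without notice)
titles <- page |> html_elements(".gs_rt") |> html_text2()

# Pull anything that looks like a DOI out of the page text
page_text   <- html_text(page)
doi_pattern <- "10\\.\\d{4,9}/[-._;()/:A-Za-z0-9]+"
dois <- unique(unlist(regmatches(page_text, gregexpr(doi_pattern, page_text))))

titles
dois
```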


/ PRISMA2020 | R package and Shiny app for making PRISMA2020 flow diagrams

Flowcharts in evidence syntheses allow the reader to rapidly understand the core procedures used in a review and to examine the attrition of irrelevant records throughout the review process. The PRISMA flow diagram published in 2009 describes the sources, numbers and fates of all identified and screened records in a review. PRISMA is currently in the final stages of a 2020 update, which includes a new version of the PRISMA flow diagram; the PRISMA2020 package and Shiny app produce flow diagrams that follow this updated template.
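A hedged sketch of typical use, assuming the function names (PRISMA_data(), PRISMA_flowdiagram()) and the bundled CSV template path as we recall them from the package documentation; check the package README for the current interface:

```r
# install.packages("PRISMA2020")
library(PRISMA2020)

# Load the CSV template of record counts shipped with the package,
# edit it to match your review, then build the flow diagram
csv  <- read.csv(system.file("extdata", "PRISMA.csv", package = "PRISMA2020"))
data <- PRISMA_data(csv)
PRISMA_flowdiagram(data, interactive = FALSE)
```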

 

/ ROSESflowchart | R package and Shiny app for making ROSES flow diagrams

Systematic reviews should be reported with a high degree of methodological detail, and the ROSES reporting standards call for this level of detail in both systematic reviews and systematic maps. An integral part of the methodological description of a review is a flow diagram/chart, which the ROSESflowchart package and Shiny app generate in the ROSES format.

 

/ bibfix | An R package and Shiny app for repairing and enriching bibliographic data

bibfix is an R package and Shiny app that helps users repair and enrich their bibliographic data. It does so through a suite of functions that request bibliographic data from the OpenAlex API.
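As a rough illustration of the kind of request bibfix builds on (this is not the package's own function set; field names follow the public OpenAlex schema), missing metadata for a record can be retrieved by DOI:

```r
library(jsonlite)

# DOI of the PRISMA 2009 statement, used here purely as a working example
doi  <- "10.1371/journal.pmed.1000097"
work <- fromJSON(paste0("https://api.openalex.org/works/doi:", doi))

# Fields that could be used to repair an incomplete bibliographic record
work$title
work$publication_year
work$primary_location$source$display_name   # journal / source name
work$authorships$author$display_name        # author names
```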


/ citationchaser | An R package and Shiny app for forward and backward citation chasing in academic searching

In searching for research articles, we often want to obtain lists of the references cited by a set of studies, and also lists of articles that cite a particular study. In systematic reviews, this supplementary search technique is known as ‘citation chasing’: forward citation chasing looks for all records citing one or more articles of known relevance; backward citation chasing looks for all records referenced in one or more articles.
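The idea can be sketched directly against the OpenAlex API (citationchaser provides its own functions and uses its own citation data source, so treat this purely as a conceptual illustration):

```r
library(jsonlite)

# A 'seed' article of known relevance (the PRISMA 2009 statement as an example)
seed <- fromJSON("https://api.openalex.org/works/doi:10.1371/journal.pmed.1000097")

# Backward chasing: everything the seed article references
backward_ids <- seed$referenced_works

# Forward chasing: everything that cites the seed article (first page of results)
forward <- fromJSON(paste0("https://api.openalex.org/works?filter=cites:",
                           basename(seed$id), "&per-page=25"))
forward_titles <- forward$results$title

length(backward_ids)
head(forward_titles)
```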


/ EviAtlas | An R tool for systematic maps

Systematic Maps are, according to the Environmental Evidence Journal, “overviews of the quantity and quality of evidence in relation to a broad (open) question of policy or management relevance.” In simple terms, this means that documents are categorized according to the type, location, and publication information available for each work within a particular topic. Systematic maps are often used for environmental research, where it is particularly important to track the location of study sites. The spatial nature of a systematic map, particularly for environmental research, means that academics often use some kind of geographic map to analyze and present their information. Understanding the academic community’s familiarity with the R programming language, we built a webapp using R Shiny that could automate certain parts of creating a systematic map for environmental research.
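The kind of map EviAtlas produces can be sketched with the leaflet package and a hypothetical extract from a systematic map database (illustrative only, not EviAtlas code):

```r
library(leaflet)

# Hypothetical coded study records with geographic coordinates
studies <- data.frame(
  study_id = c("Smith 2015", "Garcia 2018", "Chen 2020"),
  lat      = c(52.1, -1.3, 35.7),
  lon      = c(0.1, 36.8, 139.7),
  outcome  = c("soil carbon", "crop yield", "biodiversity")
)

# Interactive evidence atlas: one marker per study site
leaflet(studies) |>
  addTiles() |>
  addCircleMarkers(lng = ~lon, lat = ~lat,
                   popup = ~paste0(study_id, ": ", outcome))
```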


/ greylitsearcher | An R package and Shiny app for systematic and transparent searching for grey literature

greylitsearcher is a web-based tool for performing systematic and transparent searches of organisational websites. It builds structured, documentable queries using Google’s site search functionality (the ‘site:’ operator), which restricts a search to the pages of a given website.
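A minimal sketch of the underlying idea (not greylitsearcher's own functions; the target domain and search terms are hypothetical):

```r
# Build a Google 'site:' query that restricts a search to one organisation's website
terms  <- c("\"evidence synthesis\"", "\"systematic review\"")
domain <- "www.unep.org"   # hypothetical target organisation

query <- paste(paste(terms, collapse = " OR "), paste0("site:", domain))
url   <- paste0("https://www.google.com/search?q=", URLencode(query, reserved = TRUE))
url
# browseURL(url)  # open the results page; record the query and date for transparency
```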


/ metadat | Meta-analytic datasets for R

The metadat package contains a large collection of meta-analysis datasets. These datasets are useful for teaching purposes, illustrating/testing meta-analytic methods, and validating published analyses.
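For example, the classic BCG vaccine trials dataset ships with the package (dataset names here follow the package documentation):

```r
# install.packages("metadat")
library(metadat)

# Load and inspect one bundled meta-analysis dataset
data(dat.bcg)
head(dat.bcg)

# List all datasets shipped with the package
data(package = "metadat")
```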


/ metafor automated reports | A function to summarize meta-analysis outputs

This function dynamically generates an analysis report (in html, pdf, or docx format) based on a model object. The report includes information about the model that was fitted, the distribution of the observed outcomes, the estimate of the average outcome based on the fitted model, tests and statistics that are informative about potential (residual) heterogeneity in the outcomes, checks for outliers and/or influential studies, and tests for funnel plot asymmetry. A forest plot and a funnel plot are also provided. References for all methods/analysis steps are also added to the report and cited appropriately. Additional functionality for reports based on meta-regression models will be incorporated soon. The function is part of the metafor package.
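In recent versions of metafor this functionality is, as far as we are aware, exposed as reporter(); a typical call using the classic BCG example looks like this:

```r
library(metafor)

# Calculate log risk ratios for the BCG vaccine trials and fit a random-effects model
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, data = dat)

# Generate the automated report (html by default; pdf and docx output are also possible)
reporter(res)
```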


/ metaverse | Evidence synthesis workflows in R

Evidence synthesis (ES) is the process of identifying, collating and synthesising primary scientific research (such as articles and reports) for the purposes of providing reliable, transparent summaries. The goal of this project is to collect, integrate and expand the universe of available functions for ES projects in R, via our proposed metaverse package. Like tidyverse, metaverse is envisioned as a collector package that makes it straightforward to install a set of functions - currently located in separate packages - for a common purpose.
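A hedged sketch of what this looks like in practice (the GitHub repository name is an assumption, since the package is under active development and not yet on CRAN):

```r
# install.packages("remotes")
remotes::install_github("rmetaverse/metaverse")

# Loading the collector package attaches the component evidence-synthesis packages,
# much as library(tidyverse) attaches ggplot2, dplyr and friends
library(metaverse)
```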


/ PredicTER | A Shiny app for predicting the time requirements of evidence reviews

PredicTER is a tool to predict the time needed to conduct a systematic review or systematic map.


/ robvis | Risk of bias assessments in R

robvis is an R package that allows users to quickly visualise risk-of-bias assessments performed as part of a systematic review. It allows users to create weighted bar plots of the distribution of risk-of-bias judgements within each bias domain, in addition to “traffic light” plots of the domain-level judgements for each individual study. The resulting figures are formatted according to the risk-of-bias assessment tool used to perform the assessments (currently supported tools are ROB-2, ROBINS-I and QUADAS-2). An associated Shiny app provides a user-friendly interface for the tool.
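Typical usage, assuming the function and example-dataset names given in the package documentation (rob_summary(), rob_traffic_light() and the bundled data_rob2):

```r
# install.packages("robvis")
library(robvis)

# data_rob2 is an example set of RoB 2 assessments bundled with the package
rob_summary(data = data_rob2, tool = "ROB2")        # weighted bar plot by domain
rob_traffic_light(data = data_rob2, tool = "ROB2")  # per-study traffic-light plot
```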


/ sysrevdata | R package for converting systematic review and map databases into different formats for human- and machine-readability

One of the most important steps in the process of conducting a systematic review or map is data extraction and the production of a database of coding, metadata and study data. There are many ways to structure these data, but to date, no guidelines or standards have been produced to support their production within the evidence synthesis community. On top of this, there is little adoption of easily machine-readable, readily reusable and adaptable databases, even though such databases would be easier for review authors to translate into different formats (for example for tabulation, visualisation and analysis) and easier for readers of the review/map to reuse. As a result, it is common for systematic review and map authors to produce bespoke, complex data structures that, although typically provided digitally, require considerable effort to understand, verify and reuse.
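As a small illustration of the kind of conversion involved (not sysrevdata's own interface; the data are hypothetical), tidyr can move a ‘condensed’ human-readable coding column into a long, machine-readable format and back:

```r
library(tidyr)

# Hypothetical extract from a systematic map database with a condensed coding column
condensed <- data.frame(
  study_id = c("S001", "S002"),
  outcomes = c("soil carbon; crop yield", "biodiversity")
)

# Long format: one row per study-outcome combination, easier to filter and analyse
long <- separate_rows(condensed, outcomes, sep = ";\\s*")
long

# And back to a condensed, human-readable table
aggregate(outcomes ~ study_id, data = long, FUN = paste, collapse = "; ")
```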
