Projects

Software


/ GSscraper | R package for scraping search results (including DOIs) from Google Scholar

Google Scholar is one of the most commonly used resources by researchers for everyday information retrieval. Although designed for ‘lookup’ searches (i.e. finding one or more specific records), it may also be a useful additional resource in evidence reviews (including systematic reviews) for academic and grey literature.

  R  

/ PRISMA2020 | R package and Shiny app for making PRISMA 2020 flow diagrams

Flowcharts in evidence syntheses allow the reader to rapidly understand the core procedures used in a review and examine the attrition of irrelevant records throughout the review process. The PRISMA flow diagram published in 2009 describes the sources, numbers and fates of all identified and screened records in a review. PRISMA is currently in the final stages of a 2020 update, including a new version of the PRISMA flow diagram.
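
For orientation, a minimal sketch of typical usage is shown below; the function names and the bundled template path reflect our reading of the package documentation and may change, so check the package help pages for the current interface.

```r
# Minimal sketch: load the template of record counts shipped with the package,
# convert it to plot-ready data, draw the flow diagram, and save it to file.
# Function and argument names are assumptions based on the package docs.
library(PRISMA2020)

csv  <- read.csv(system.file("extdata", "PRISMA.csv", package = "PRISMA2020"))
data <- PRISMA_data(csv)
flow <- PRISMA_flowdiagram(data, interactive = FALSE)
PRISMA_save(flow, filename = "prisma_flowdiagram.pdf")
```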

 

/ ROSESflowchart | R package and Shiny app for making ROSES flow diagrams

Systematic reviews should be described with a high degree of methodological detail. The ROSES reporting standards call for a high level of reporting detail in systematic reviews and systematic maps. An integral part of the methodological description of a review is a flow diagram/chart.

 

/ bibfix | An R package and Shiny app for repairing and enriching bibliographic data

bibfix is an R package and Shiny app that helps users repair and enrich their bibliographic data. It does so through a suite of functions that request bibliographic data from the OpenAlex API.
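
To illustrate the kind of request involved (not bibfix’s own functions), the sketch below retrieves a single record from the OpenAlex API with httr and jsonlite; missing titles and abstracts can be filled in from the returned fields. The DOI is an arbitrary example.

```r
# Illustrative only: a direct OpenAlex works lookup by DOI, of the kind bibfix
# performs behind the scenes.
library(httr)
library(jsonlite)

doi  <- "10.7717/peerj.4375"   # example DOI
resp <- GET(paste0("https://api.openalex.org/works/doi:", doi))
work <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

work$title                     # e.g. fill in a missing title
work$abstract_inverted_index   # abstracts are returned as an inverted index
```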

  R  

/ citationchaser | An R package and Shiny app for forward and backward citation chasing in academic searching

In searching for research articles, we often want to obtain lists of references from across studies, and also obtain lists of articles that cite a particular study. In systematic reviews, this supplementary search technique is known as ‘citation chasing’: forward citation chasing looks for all records citing one or more articles of known relevance; backward citation chasing looks for all records referenced in one or more articles.
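
The underlying lookups can be illustrated directly against the OpenAlex API (citationchaser wraps this kind of query behind its own functions and a Shiny interface); a rough sketch, using OpenAlex’s example work ID:

```r
# Backward chasing: the records a known article references.
# Forward chasing: the records that cite that article.
library(httr)
library(jsonlite)

id <- "W2741809807"  # an OpenAlex work ID for an article of known relevance

work <- fromJSON(content(GET(paste0("https://api.openalex.org/works/", id)),
                         as = "text", encoding = "UTF-8"))
backward <- work$referenced_works   # backward citation chasing

cites <- fromJSON(content(GET(paste0("https://api.openalex.org/works?filter=cites:", id)),
                          as = "text", encoding = "UTF-8"))
forward <- cites$results$id         # forward citation chasing (first page of results)
```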

  R  

/ doi2txt | A tool to parse full-text article sections based on a DOI

Screening articles for evidence synthesis is typically done using titles and abstracts; however, full-text screening can be more efficient because there is no guessing involved when assessing whether the methods of a study match inclusion criteria. Full text of articles is also necessary for coding article metadata, such as study location or types of data collected, or for analysing article content, such as with topic modelling. doi2txt facilitates full-text article screening, coding, and analysis by retrieving plain-text versions of journal articles based on a DOI, if available, or on bibliographic information such as title and authors when a DOI is not available or known. It also contains functions for processing full-text articles, such as coding articles against an ontology of topics, or geocoding articles to obtain actual or approximate latitude and longitude.

 

/ EviAtlas | An R tool for systematic maps

Systematic Maps are, according to the Environmental Evidence Journal, “overviews of the quantity and quality of evidence in relation to a broad (open) question of policy or management relevance.” In simple terms, this means that documents are categorized according to the type, location, and publication information available for each work within a particular topic. Systematic maps are often used for environmental research, where it is particularly important to track the location of study sites. The spatial nature of a systematic map, particularly for environmental research, means that academics often use some kind of geographic map to analyze and present their information. Understanding the academic community’s familiarity with the R programming language, we built a webapp using R Shiny that could automate certain parts of creating a systematic map for environmental research.

  R  

/ forestr | Interactive forest plots

Forestr is an online platform for interactive visualisations of forest plots.

  JS  

/ Grey Literature Reporter | Chrome plugin for grey literature searching

In evidence synthesis it is relatively easy to find published literature on a given topic, and to record which search string was used in which database, how many records were found, and how many were used or not used and why. It is much more difficult to be systematic in searching for and recording grey literature.

 

/ greylitsearcher | An R package and Shiny app for systematic and transparent searching for grey literature

greylitsearcher is a web-based tool for performing systematic and transparent searches of organisational websites. The tool uses Google’s site search functionality, which allows you to run structured, documented searches across all pages of a given website.
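
As a rough illustration of the approach (the package’s own functions handle pagination, documentation and export), a ‘site:’-restricted Google query can be built like this; the website and search terms below are examples only:

```r
# Illustrative sketch: build a structured Google query restricted to one
# organisational website using the "site:" operator.
site  <- "www.unep.org"
terms <- "\"marine litter\" OR \"plastic pollution\""
query <- paste0("https://www.google.com/search?q=site:", site, "+", URLencode(terms))
query
# browseURL(query)  # open the structured search in a browser
```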

  R  

/ metadat | Meta-analytic datasets for R

The metadat package contains a large collection of meta-analysis datasets. These datasets are useful for teaching purposes, illustrating/testing meta-analytic methods, and validating published analyses.
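
A quick look at one of the bundled datasets (most are named dat.&lt;author&gt;&lt;year&gt;; dat.bcg is the classic BCG vaccine trials example):

```r
# Browse a bundled meta-analytic dataset and its documentation.
library(metadat)

head(dat.bcg)   # trial-level data from the BCG vaccine meta-analysis
?dat.bcg        # help page, including the original reference
```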

  R  

/ metafor automated reports | A function to summarize meta-analysis outputs

This function dynamically generates an analysis report (in html, pdf, or docx format) based on a model object. The report includes information about the model that was fitted, the distribution of the observed outcomes, the estimate of the average outcome based on the fitted model, tests and statistics that are informative about potential (residual) heterogeneity in the outcomes, checks for outliers and/or influential studies, and tests for funnel plot asymmetry. A forest plot and a funnel plot are also provided. References for all methods/analysis steps are also added to the report and cited appropriately. Additional functionality for reports based on meta-regression models will be incorporated soon. The function is part of the metafor package.
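
A minimal sketch of the workflow, fitting a standard random-effects model to the bundled BCG vaccine data and generating the report (the output-format value in the last line is an assumption; see ?reporter):

```r
library(metafor)

# Compute log risk ratios for the BCG vaccine trials and fit a random-effects model
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat)

reporter(res)   # generates and opens an HTML report of the analysis
# reporter(res, format = "pdf_document")   # pdf/docx output (value assumed; see ?reporter)
```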

  R  

/ metaverse | Evidence synthesis workflows in R

Evidence synthesis (ES) is the process of identifying, collating and synthesising primary scientific research (such as articles and reports) for the purposes of providing reliable, transparent summaries. The goal of this project is to collect, integrate and expand the universe of available functions for ES projects in R, via our proposed metaverse package. Like tidyverse, metaverse is envisioned as a collector package that makes it straightforward to install a set of functions - currently located in separate packages - for a common purpose.
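
metaverse is under development and not yet on CRAN; assuming the repository is rmetaverse/metaverse on GitHub, installation would look roughly like this:

```r
# Sketch: install the development version from GitHub (repository name assumed)
# install.packages("remotes")
remotes::install_github("rmetaverse/metaverse")
library(metaverse)   # attaches the component evidence-synthesis packages
```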

  R  

/ Paperweight | Using natural language processing to improve search queries

Paperweight is driven by a combination of natural language processing (NLP) algorithms. In the evidence synthesis process, the first steps typically require reviewers to manually build a database of articles and journals they want to summarize. This process entails an exhaustive search of Google Scholar using manually chosen keywords. This approach is vulnerable to bias, since the reviewer may be more likely to find certain articles or journals than others, depending on the selected search keywords. Tackling this problem, Paperweight seeks to remove the need for a reviewer to manually choose keywords to form their search queries.

  Python  

/ PDF annotation | Coding and extracting data from PDFs

Extraction of content from articles, also known as coding, is an important part of evidence synthesis, especially for meta-analyses that require multiple predefined parameters to be coded and extracted from articles. This task is usually tedious, so multiple people, potentially including external helpers, may be involved in coding. Software tools that support efficient content extraction and enable indexing of extracted content against field labels are highly desirable.

JS  

/ PDF reference extraction | Extract reference lists from PDFs

Full-text PDFs are almost always the most reliable source of information from academic articles. Even though several resources allow for the extraction of data from full-text documents, much of the time the information is incomplete, inaccurate, or not available. PDFs were created to look great, not to have data extracted from them, so when you try to copy/paste from a PDF you often get unexpected results. In this first version, the project allows users to easily copy text from a PDF and attempts to automatically identify the references.

  Python  

/ PredicTER | A Shiny app for predicting the time requirements of evidence reviews

PredicTER is a tool to predict the time needed to conduct a systematic review or systematic map.

  R  

/ Reference completer | A tool to fill in missing information from incomplete references

Citations downloaded from bibliographic databases and other resources, such as Google Scholar, are often missing details, such as abstracts or volume/page information, that are important for a variety of reasons, including screening in systematic reviews and locating full-text documents. This functionality is intended to fill in missing information, including abstracts, across a set of citation files.
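
The tool itself is written in Python, but the underlying technique can be sketched in R: look the DOI up on the Crossref REST API and copy across whatever fields the export lacks. The DOI below is an example only.

```r
# Illustrative sketch of the technique (not the tool's own code): fetch a record
# from Crossref by DOI and read off fields missing from the exported citation.
library(httr)
library(jsonlite)

doi <- "10.7717/peerj.4375"   # example DOI taken from a citation file
rec <- fromJSON(content(GET(paste0("https://api.crossref.org/works/", doi)),
                        as = "text", encoding = "UTF-8"))$message

rec$volume     # NULL when the publisher has not deposited the field
rec$page
rec$abstract   # abstracts are only present when deposited by the publisher
```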

  Python  

/ Integration & Research Weaving | Developing the ICASR Integration Engine

Automation tools are speeding up the conduct of evidence synthesis. However, the uptake of these tools amongst reviewers is slow. Potential barriers to use are: 1) the tools often operate in isolation, 2) reviewers need to manipulate their citation data into a specific format to use each tool, and 3) different tools require different levels of programming or computing expertise.

  JS  

/ robvis | Risk of bias assessments in R

robvis is an R package that allows users to quickly visualise risk-of-bias assessments performed as part of a systematic review. It allows users to create weighted bar plots of the distribution of risk-of-bias judgements within each bias domain, in addition to “traffic light” plots of the specific domain-level judgements for each study. The resulting figures are formatted according to the risk-of-bias assessment tool used to perform the assessments (currently supported tools are ROB-2, ROBINS-I and QUADAS-2). An associated Shiny app provides a user-friendly interface for the tool.
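
A minimal sketch using the example ROB-2 data bundled with the package (function and dataset names reflect our reading of the package documentation):

```r
library(robvis)

rob_summary(data = data_rob2, tool = "ROB2")         # weighted bar plot per bias domain
rob_traffic_light(data = data_rob2, tool = "ROB2")   # domain-level judgements per study
```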

  R  

/ ROSES | A website to support reporting of systematic evidence syntheses

The ROSES forms were developed to improve the standards of evidence synthesis reporting and the transparency of the methods used for reviews and maps. The website (www.roses-reporting.com) aims to encourage adoption of the ROSES forms and to lower the barrier to entry in the least obtrusive way possible. In this project we’ve been working to improve the user experience, increasing efficiency and helping reviewers get the most out of the forms and website.

JS  

/ Search Strategist | A web tool to test and improve search strategies

Defining a good search strategy for systematic reviews can be a particularly challenging task. Some of the problems encountered are: two people asked to design a strategy will produce totally different outputs; the number of hits may be prohibitively high; relevant references may be missed because a specific keyword was omitted; few means of validating search strategies exist; it is difficult to adapt a strategy for other databases; and errors may be introduced when adapting strategies between databases.

Python  

/ sysrevdata | R package for converting systematic review and map databases into different formats for human- and machine-readability

One of the most important steps in the process of conducting a systematic review or map is data extraction and the production of a database of coding, metadata and study data. There are many ways to structure these data, but to date, no guidelines or standards have been produced for the evidence synthesis community to support their production. On top of this, there is little adoption of easily machine-readable, readily reusable and adaptable databases: these databases would be easier to translate into different formats by review authors, for example for tabulation, visualisation and analysis, and also by readers of the review/map. As a result, it is common for systematic review and map authors to produce bespoke, complex data structures that, although typically provided digitally, require considerable efforts to understand, verify and reuse.
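
To illustrate the kind of conversion involved (using tidyr directly rather than sysrevdata’s own helpers, and a hypothetical two-study coding table):

```r
# Wide format: one row per study, one column per coding variable.
library(tidyr)

wide <- data.frame(
  study   = c("Smith 2018", "Jones 2020"),
  habitat = c("forest", "grassland"),
  outcome = c("abundance", "richness")
)

# Long format: one row per study x variable pair, which is easier for machines
# to filter, tabulate and visualise.
long <- pivot_longer(wide, cols = -study,
                     names_to = "variable", values_to = "value")
long
```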

  R  

/ Thalloo Evidence Mapping | A Jekyll Theme for Dataset Visualisation

This project provides an easy-to-use template for web visualisations of environmental evidence maps. Thalloo is a combination of map components and a Jekyll theme that enable quick, simple, and customisable deployment of a web-based tool to display evidence maps. The framework has the following features: i) Visual clustering and display of categorical data. Given a display category (e.g. crop, commodity), and a custom colour palette, points are displayed on a map. Depending on the zoom level and extent, points are clustered dynamically for best display. Any cluster can be selected to see the full metadata about the evidence points it contains; ii) Filtering. Data can be filtered by property in real time, using multiple filters within a property, and using multiple properties to filter; iii) Slicing of dimensionality. Given continuous data (e.g. publication year, time, or an effect size), the map allows real time ‘slicing’ of the dataset along one or many dimensions; iv) Abstract and funding logos. Provide attribution to your funders and partner institutions by including their logos at the top of your map view.

  JS  




Discussions

ESHToolsBlog | User-focused blog series

The Evidence Synthesis Technology world can be complicated - tools can be hard to find, it can be difficult to know what skill level is needed, and they can involve a considerable learning curve. Here at ESH, we are trying to lower the costs associated with finding, learning and using new ESTech tools and frameworks. This blog series aims to introduce key tools that can help increase transparency, accessibility, efficiency and rigour in evidence syntheses. Each month, a member of the ESH family will introduce their ESTech, explaining its purpose, the background skills needed, and how it can be used to support evidence synthesis.

 

Limitations and biases of commercial bibliographic databases | Proposed academic paper

Reliable evidence synthesis requires access to a comprehensive, unbiased body of literature that can be searched for relevant information. Systematic reviewers typically search multiple (upwards of 10) bibliographic databases to identify sets of search results that might contain relevant records. Access to these databases is often restrictively expensive, hampering efforts to synthesise evidence by smaller organisations and groups from low- and middle-income countries, for example. When reviewers export references from these databases they must typically do so in small batches (this supposedly stops people from replicating commercial databases for profit): for Web of Science this must be done in batches of 500, which can add considerable time to a review with 20,000 search results or more! Finally, databases such as Web of Science exacerbate publication bias by selecting journals and publishers that are perceived to be of ‘high impact’, for example using citation indices. So, these resources may be expensive, hard to use, and offer a biased selection of evidence. In order to facilitate evidence synthesis and to reduce bias in how information is indexed and found, we call for the production of an Open Source, Open Access one-stop-shop database that catalogues all known academic research. Since tables of contents are freely available online and the necessary technology exists, such an important and useful tool could be produced.

 

Evidence Synthesis training resources | Ongoing project

Training resources for evidence synthesis are disparate and often hard to find. This project aims to collate and catalogue training resources for evidence synthesis methods from across disciplines, sectors, sources and formats, making it easier to find the right training. The result will be a web-based user-support tool.

 

A new ecosystem for evidence synthesis | Published academic paper

The number of publications has been increasing exponentially, and as a result, so has the research field of evidence synthesis. Consequently, there is now a need to maintain the quality, currency and credibility of evidence synthesis approaches. In this commentary, we provide a vision for evidence synthesis as a fundamental tool for generating and guiding decision-making. This paper is aimed at all stakeholders, including researchers, institutions, and the broader community.

 

ESMARConf | Evidence Synthesis and Meta-Analysis in R Conference

ESMARConf is a FREE, online annual conference series dedicated to evidence synthesis and meta-analysis in R. Our aim is to raise awareness of the utility of Open Source tools in R for conducting all aspects of evidence syntheses (systematic reviews/maps, meta-analysis, rapid reviews, scoping reviews, etc.), to build capacity for conducting rigorous evidence syntheses, to support the development of novel tools and frameworks for robust evidence synthesis, and to support a community of practice working in evidence synthesis tool development. ESMARConf began in 2021 and is coordinated by the Evidence Synthesis Hackathon.

 

ESTech special series | Special series of published papers

As the appreciation and need for timely and rigorous evidence synthesis continue to grow, so too will the need for tools and frameworks to conduct reviews of expanding evidence bases in an efficient and time-sensitive manner. Efforts to future-proof evidence synthesis through such Evidence Synthesis Technology (ESTech) have so far been isolated across interested individuals or groups, with no concerted effort to collaborate or build communities of practice in technology production.

 

An assessment of data and code availability in reviews | Ongoing project

Transparency is vital for repeatability and verifiability of systematic reviews, as with all other research. Authors of quantitative systematic reviews (i.e. ones that include meta-analysis) can maximise transparency by publishing the data they collected and the analytic code describing and documenting their analyses. This project will conduct a map of evidence syntheses to assess the degree to which data and code used in meta-analyses are made public by authors and journals publishing systematic reviews in environmental science and healthcare.

 

R for Evidence Synthesis | Proposed academic paper

R is a widely-used, open source programming language and statistical environment. Users are able to contribute add-ons to R functionality in a standardised way by developing new software ‘packages’. However, identifying which packages are most useful for a specific task can be challenging, particularly for evidence synthesis (ES) projects which typically include a number of discrete tasks, many using packages that may have been designed for other purposes. Consequently, a valuable tool for future researchers (and hackathons) would be a ‘map’ of available software packages, showing how those packages apply to ES. This would help guide new users through effective workflows, as well as identifying parts of the evidence synthesis process that are currently well supported in R, or conversely, in need of further software development. This project is currently in the data collection phase, wherein participants systematically search for R packages of potential value to ES projects and catalogue their findings in a structured way. The intended output is an academic article describing our findings, linked to a live database of R packages, the functions they contain, and the specific ES tasks that they each solve.

 

Standardised data files for systematic reviews | Ongoing project

Systematic reviews are complex, laborious tasks that produce vast amounts of data. However, the effort required to produce these data is typically lost once a review is completed: some information is reported in the review, but often information is missing or specific details are lacking.

 

Making primary research synthesis ready | Proposed academic paper

Evidence synthesis relies on primary research that is reliable and transparent, and in which key information is readily accessible and useful for broader synthesis. We propose a succinct list of ideal attributes that primary research articles should report as standard so that they are more likely to be found and included in evidence syntheses. We discuss how implementing these changes to primary research reporting might be incentivised for authors, peer reviewers, editors, journals, and institutions across medicine, environment, ecology, and social science disciplines. We focus on the area of prevention science, but our conclusions are applicable across disciplines and fields.