| Date | Time (GMT) | Session | Speakers | Description |
| --- | --- | --- | --- | --- |
| 21 Feb | 08:00-17:00 | Workshop 1 | Wolfgang Viechtbauer | **Introduction to meta-analysis in R** We will start by looking at methods for quantifying the results from individual studies included in a meta-analysis in terms of various effect size or outcome measures (e.g., raw or standardized mean differences, ratios of means, risk/odds ratios, risk differences, correlation coefficients). We will then delve into methods for combining the observed outcomes (i.e., via equal- and random-effects models) and for examining whether the outcomes depend on the characteristics of the studies from which they were derived (i.e., via meta-regression and subgrouping). A major problem that may distort the results of a meta-analysis is publication bias (i.e., when the studies included in a meta-analysis are not representative of all the relevant research that has been conducted on a particular topic). Therefore, current methods for detecting and dealing with publication bias will be discussed next. Finally, time permitting, we will look at some advanced methods for handling the more complex data structures that frequently arise in practice, namely when studies contribute multiple effect sizes to the same analysis, leading to dependencies in the data that need to be accounted for (via multilevel/multivariate models and robust variance estimation). (An illustrative metafor sketch appears below the programme table.) Find out more info here |
| | 10:30-12:30 | Workshop 2 | Alison Bethel | **Searching for studies in meta-analyses and evidence syntheses** This workshop will provide an overview of why searching for studies in a meta-analysis or other evidence synthesis is a vital step that should be carefully planned and conducted. It will highlight methods that can be used to improve comprehensiveness, reduce risk of bias, and increase your working efficiency. Register here |
| 22 Feb | 09:30-10:30 | Opening | Neal Haddaway | Welcome and notices |
| | | | Terri Pigott | Keynote presentation [title and summary to be announced] |
| | 10:45-11:45 | Review processes from A to Z (part 1) | Bronwen Hunter | **Using state-of-the-art transformer models to automate text classification in R** The utilisation of automated classification tools from the field of Natural Language Processing (NLP) can massively decrease the amount of time required for the article-screening stage of evidence synthesis. To achieve high accuracy, models often require huge volumes of ‘gold-standard’ labelled training data, which can be expensive and time-consuming to produce. As a result, ‘transfer learning’, in which NLP models pre-trained on large corpora are downloaded and fine-tuned on a smaller number of hand-labelled texts, is an increasingly popular method for achieving high-performance text classification. The availability of state-of-the-art transformer models via the open-source ‘Hugging Face’ library has also improved the accessibility of this approach. However, materials outlining how to make use of such resources in R are limited. At ESMARCONF 2022, I will introduce and demonstrate how transfer learning can be carried out in R and seamlessly integrated with data collection from academic databases and internet sources. (A hedged sketch of calling transformer models from R appears below the programme table.) |
| | | | Elina Takola | **Towards an automated Research Weaving** We here present a systematic study of the concept of the ecological niche. The ecological niche has been described in various ways: from habitat to role, and from biotope to hypervolume. Although it has many different definitions, it remains one of the most fundamental concepts in ecology. Our aim is to implement the Research Weaving framework on a large body of literature relevant to the ecological niche, in order to illustrate how this concept has evolved since its introduction in the early 20th century. We analysed over 29,000 publications using systematic maps and bibliometric webs. Our synthesis consisted of eight components: phylogeny, type/validity, temporal trends, spatial patterns, contents, terms, authors, and citations. We used bibliometric analyses, quantitative analyses of publication metadata, and text-mining algorithms. This integrative presentation of the development of the ecological niche concept provides an overview of how dynamics changed over time. It also allows us to detect knowledge gaps while presenting a systematic summary of existing knowledge. To our knowledge, this is one of the first projects to implement the research weaving framework using exclusively automated processes. |
| | | | Neal Haddaway | **citationchaser: a tool for transparent and efficient forwards and backwards citation chasing in systematic searching** Systematic searching aims to find all possibly relevant research records from multiple sources to collate an unbiased and comprehensive set of bibliographic records. Along with bibliographic databases, systematic reviewers use a variety of additional methods to minimise procedural bias, including assessing records that are cited by and that cite a set of articles of known relevance (citation chasing). Citation chasing exploits connections between research articles to identify relevant records for consideration in a review by making use of explicit mentions of one article within another. It is a popular supplementary search method because it helps to build on the work of primary research and review authors, identifying potentially relevant studies that might otherwise not be retrieved by other search methods; for example, because they did not use the review authors’ search terms in the specified combinations in their titles, abstracts or keywords. Here, we describe an open-source tool that allows for rapid forward and backward citation chasing. We introduce citationchaser, an R package and Shiny app for conducting forward and backward citation chasing from a starting set of articles. We describe the sources of data, the backend code functionality, and the user interface provided in the Shiny app. (A hedged usage sketch appears below the programme table.) |
| | | | Joshua Polanin | **An Evidence Gap Map Shiny Application for Effect Size or Summary Level Data** Evidence Gap Maps (EGMs) provide a structured visual framework designed to identify areas where research has and has not been conducted. Traditional EGMs combine at least two characteristics (e.g., outcome measurement, research design) mapped onto the x- and y-axes to form a grid; EGMs can be in table, graph, or chart format. The intersections of the axes on the grid contain, at minimum, information on the number of studies conducted for each combination of the levels of the characteristics. We created this Shiny app to ease the construction of EGMs in the form of a graph. The app allows users to upload their dataset, use point-and-click options to summarize data for combinations of factors, and then create an EGM using the ggplot2 package in R (Wickham, 2011). We also provide an example dataset for instructional purposes. Further, the app outputs the R syntax used to create the plot; users can download the syntax and customize the graph if needed. |
| | 12:00-13:00 | Review processes from A to Z (part 2) | Steph Zimsen | **Automating data-cleaning and documentation of extracted data using interactive R Markdown notebooks** At the Institute for Health Metrics and Evaluation, we conduct ~40 systematic reviews each year. In our general process to search > screen > extract > analyze, we found we need an intervening step: cleaning extracted data before analysis. The problem arises from a feature of our workflow: one person extracts the data, while another analyzes it. Clean-up falls through the gap as we hand off data, so analysts must spend time cleaning, though the extractor is far more familiar with the dataset. To work faster with fewer errors, we developed a stepwise cleaning checklist, then wrote code modules to fix common problems. But juggling Excel, R, and a checklist still takes time and attention. To streamline further, we are developing a systematic solution: an interactive R Markdown notebook that takes in parameters of the specific extraction dataset, cleans and validates the data, and returns a new cleaned dataset. We are testing it with a recent systematic review dataset of ~2,800 observations from >150 sources. This semi-automated interactive code has other benefits besides valid, upload-ready analysis data. First, a flexible, parameterized template enables faster work that is easily repeated. The code can also reproducibly generate documentation of the cleaning done, extraction history, or other reports on data, parameters, and results. And critically, an interactive notebook makes sophisticated coding accessible to data extractors, who tend to have less coding experience than research analysts. |
| | | | Charis Wong | **Developing a systematic framework to identify, evaluate and report evidence for drug selection in motor neuron disease clinical trials** Motor neuron disease (MND) is a rapidly progressive, disabling and incurable disease, with average time from diagnosis to death of 18 to 30 months. Despite decades of clinical trials, effective disease-modifying treatment options remain limited. The Motor Neuron Disease – Systematic Multi-Arm Adaptive Randomisation Trial (MND-SMART; ClinicalTrials.gov registration number: NCT04302870) is an adaptive platform trial aimed at testing a pipeline of candidate drugs in a timely and efficient way. To inform selection of future candidate drugs to take to trial, we identify, evaluate and report evidence from (i) published literature, via the Repurposing Living Systematic Review (ReLiSyR-MND), a machine-learning-assisted, crowdsourced, three-part living systematic review evaluating clinical literature of MND and other neurodegenerative diseases which may share similar pathways, animal in vivo MND studies, and in vitro MND studies; (ii) experimental drug screening, including high-throughput screening of human induced pluripotent stem cell based assays; (iii) pathway and network analysis; (iv) drug and trial databases; and (v) expert opinion. Our workflow implements automation and text-mining techniques for evidence synthesis, and uses R Shiny to provide interactive, curated living evidence summaries to guide decision making. |
| | | | Vicente Ramirez | **Sniffing through the Evidence: Leveraging Shiny to Conduct Meta-Analysis on COVID-19 and Smell Loss** Early in the coronavirus pandemic, scientists sought to understand the symptoms associated with COVID-19. Among those most frequently reported was the loss of the senses of taste and smell. To estimate the prevalence of smell loss, we conducted a meta-analysis. However, the continual appearance of new literature necessitated that we keep tracking and updating our analysis. To address this issue, we leveraged the ability of R Shiny applications to update and disseminate our analysis. From June 2020 to May 2021, our web-based dashboard provided the public with daily analysis updates estimating the prevalence of smell loss. This approach proved to be an effective method of disseminating findings to our field's broader community. While the coronavirus pandemic is an exceptional example of rapid updates to the literature, the framework presented may apply to several other fields and topics. |
| | 13:30-14:30 | Graphical user interfaces | Kyle Hamilton, Rob Crystal-Ornelas (moderators) | User interfaces (UIs) can provide researchers with new ways both to learn about and to conduct their own meta-analyses and evidence syntheses. In this session, we will bring together researchers for an overview of some UIs available for doing meta-analysis and evidence synthesis in R. We will also have a discussion aimed at identifying how UIs can help evolve the future of synthesis based on R tools. |
| | | | Thomas Trikalinos | [Title tba] [Summary tba] |
| | | | Mathias Harrer | [Title tba] [Summary tba] |
| | 15:30-17:30 | Workshop 3 | Matthew Grainger | **Collaborative coding and version control - an introduction to Git and GitHub** This workshop will provide walkthroughs, examples and advice on how to use GitHub to support your work in R, whether developing packages or managing projects. Register here |
| 23 Feb | 08:00-10:00 | Workshop 4 | Ruth Garside | **The Collaboration for Environmental Evidence and what it can do for you** This workshop focuses on what the Collaboration for Environmental Evidence (CEE), a key non-profit systematic review coordinating body, can provide by way of support to anyone wishing to conduct a robust evidence synthesis in the field of environmental science, conservation, ecology, evolution, etc. The workshop will involve a presentation of the organisation, its role and free services and support, followed by a Q and A. Register here |
| | 10:00-11:00 | Quantitative synthesis - NMA | Silvia Metelli | **NMAstudio: a fully interactive web-application for producing and visualizing network meta-analyses** Several software tools have been developed in recent years for network meta-analysis (NMA), but presentation and interpretation of findings from large networks of interventions remain challenging. We developed a novel online tool, called ‘NMAstudio’, to facilitate the production and visualization of key NMA outputs in a fully interactive environment. NMAstudio is a Python web-application that provides a direct connection between a customizable network plot and all NMA outputs. The user interacts with the network by clicking one or more nodes (treatments) or edges (comparisons). Based on their selection, different outputs and information are displayed: (a) boxplots of effect modifiers assisting the evaluation of transitivity; (b) pairwise or NMA forest plots, and bi-dimensional plots if two outcomes are given; (c) league tables coloured by risk of bias or confidence ratings from the CINeMA framework; (d) incoherence tests; (e) comparison-adjusted funnel plots; (f) ranking plots; (g) evolution of the network over time. Pop-up windows with extra information are enabled. Analyses are performed in R using ‘netmeta’ and results are transformed into interactive and downloadable visualizations using reactive Python libraries such as ‘Plotly’ and ‘Dash’. A network of 20 drugs for chronic plaque psoriasis is used to demonstrate NMAstudio in practice. In summary, our application provides a truly interactive and user-friendly tool to display, enhance and communicate NMA findings. (A minimal ‘netmeta’ sketch appears below the programme table.) |
| | | | Virginia Chiocchia | **The ROB-MEN Shiny app to evaluate risk of bias due to missing evidence in network meta-analysis** We recently proposed a framework to evaluate the impact of reporting bias on the meta-analysis of a network of interventions, which we called ROB-MEN (Risk Of Bias due to Missing Evidence in Network meta-analysis). In this presentation we will show the ROB-MEN Shiny app, which we developed to facilitate this risk-of-bias evaluation process. ROB-MEN first evaluates the risk of bias due to missing evidence for each pairwise comparison separately. This step considers possible bias due to the presence of studies with unavailable results and the potential for unpublished studies. The second step combines the overall judgements about the risk of bias in pairwise comparisons with the percentage contribution of direct comparisons to the network meta-analysis (NMA) estimates, the likelihood of small-study effects, and any bias from unobserved comparisons. A level of “low risk”, “some concerns” or “high risk” of bias due to missing evidence is then assigned to each estimate. The ROB-MEN Shiny app runs the required analysis, semi-automates some of the steps with a built-in algorithm to assign the overall risk-of-bias level for the NMA estimates, and produces the tool’s output tables. We will show how the ROB-MEN app works using an illustrative example from a published NMA. ROB-MEN is the first tool for assessing the risk of bias due to missing evidence in NMA and is also incorporated in the reporting-bias domain of the CINeMA software for evaluating confidence in NMA results. |
| | | | Clareece Nevill | **Development of a Novel Multifaceted Graphical Visualisation for Treatment Ranking within an Interactive Network Meta-Analysis Web Application** Network meta-analysis (NMA) compares the effectiveness of multiple treatments simultaneously. This project aimed to develop novel graphics within MetaInsight (an interactive NMA web-app: crsu.shinyapps.io/MetaInsight) to aid assessment of the ‘best’ intervention(s). The most granular results are Bayesian rank probabilities, which can be visualised with (cumulative) rank-o-grams. Summary measures exist; simpler measures (e.g. probability best) may be easier to interpret but are often more unstable and do not encompass the whole analysis. The surface under the cumulative ranking curve (SUCRA) is popular, linking directly with cumulative rank-o-grams. A critical assessment of current literature regarding ranking methodology and visualisation directed the creation of graphics in R using ‘ggplot2’ and ‘shiny’. The Litmus Rank-O-Gram presents a cumulative rank-o-gram alongside a ‘litmus strip’ of SUCRA values acting as a key. The Radial SUCRA plot presents SUCRA values for each treatment radially, with a network diagram of evidence overlaid. To aid interpretation and facilitate sensitivity analysis, the new graphics are interactive and presented alongside treatment-effect and study-quality results. Treatment ranking is powerful but should be interpreted cautiously, with transparent, all-encompassing visualisations. This interactive tool will be pivotal for improving how researchers and stakeholders use and interpret ranking results. |
| | | | Tasnim Hamza | **crossnma: A new R package to synthesize cross-design evidence and cross-format data** Network meta-analysis (NMA) is commonly used to compare interventions simultaneously by synthesising the available evidence. That evidence is obtained from non-randomized studies (NRS) or randomized controlled trials and is accessible as individual participant data (IPD) or aggregate data (AD). We have developed a new R package, crossnma, which allows us to combine these different pieces of information while accounting for their differences. The package conducts Bayesian NMA and meta-regression to synthesize cross-design evidence and cross-format data. It runs a range of models with JAGS, generating the code automatically from the user’s input. A three-level hierarchical model is implemented to combine IPD and AD, and we also integrate four different models for combining the different study designs: (a) ignoring their differences in risk of bias, (b) using NRS to construct discounted treatment-effect priors, and (c, d) adjusting for the risk of bias in each study in two different ways. Up to three study- or patient-level covariates can also be included, which may help explain some of the heterogeneity and inconsistency across trials. TH and GS are supported by the HTx project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement Nº 825162. |
| | 11:30-12:30 | Other quantitative synthesis | Constantin Yves Plessen | **What if…? A very short primer on conducting multiverse meta-analyses in R** Even though conventional meta-analyses provide an overview of the published literature, they do not consider the different paths that could have been taken in selecting or analyzing the data. At times, multiple meta-analyses with overlapping research questions reach different conclusions due to differences in inclusion and exclusion criteria or data-analytical decisions. It is therefore crucial to evaluate the influence such choices might have on the result of each meta-analysis. Were the meta-analytical method and exclusion criteria decisive, or is the same result reached via multiple analytical strategies? What if a meta-analyst had decided to go down a different path: would the same outcome occur? To ensure that the conclusions of a meta-analysis are not disproportionately influenced by data-analytical decisions, a multiverse meta-analysis can provide the entire picture and underpin the robustness of the findings, or lack thereof, by conducting all possible and reasonable meta-analyses at once. In this way, multiverse meta-analyses provide a research integration similar to umbrella reviews, yet additionally investigate the influence that flexibility in data analysis could have on the resulting summary effect size. During the talk I will give insight into this potent method and run through the multiverse of meta-analyses on the efficacy of psychological treatments for depression as an empirical example. (A generic multiverse loop is sketched below the programme table.) |
| | | | Megha Joshi | **wildmeta: Cluster Wild Bootstrapping for Meta-Analysis** Evidence synthesists are often interested in whether certain features of samples, interventions, or study designs are systematically associated with the strength of intervention effects. In the framework of meta-analysis, such questions can be examined through moderator analyses. In practice, moderator analyses are complicated by the fact that meta-analytic data often include multiple dependent effect sizes per primary study. A common method to handle dependence, robust variance estimation (RVE), leads to excessive false-positive results when the number of studies is small. Small-sample corrections for RVE have been proposed, but they have low power, especially for multiple-contrast hypothesis tests (e.g., tests of whether average effects are equal across three different types of studies). Joshi, Pustejovsky & Beretvas (2021) examined an alternative method for handling dependence, cluster wild bootstrapping, and showed through simulation studies that it maintains adequate rates of false-positive results while providing more power than existing small-sample correction methods. In this presentation, I will introduce a package, called wildmeta, that implements cluster wild bootstrapping specifically for meta-analysis. The presentation will cover when and why meta-analysts should use cluster wild bootstrapping, and how to use the functions in the package with robumeta and metafor models. (A hedged wildmeta sketch appears below the programme table.) |
| | | | Alex Nicol-Harper | **Using sub-meta-analyses to maintain independence among spatiotemporally replicated demographic datasets** We use population modelling to inform conservation for the common eider, a well-studied seaduck of the circumpolar Northern Hemisphere. Our models are parameterised by vital rates measuring survival and reproduction, which we collated through literature review and a call for data. We performed precision-weighted meta-analysis (Doncaster & Spake, 2018) for vital rates with >20 independent estimates: adult annual survival, clutch size (number of eggs laid) and hatching success (proportion of eggs producing hatchlings). We excluded estimates without associated sample sizes, and included variance estimates where provided or calculable, otherwise inputting the imputed mean variance. A random-effects error structure allowed for likely variation in population means across this species' wide range; however, all I² values were less than 1%, suggesting that most between-study variation was due to chance rather than true heterogeneity. In many cases, studies presented multiple estimates for a given vital rate – e.g. over different study areas and/or multiple years. Where appropriate, we conducted sub-meta-analyses to generate single estimates which could be handled equivalently to non-disaggregated estimates from other studies. These decisions align with the suggestions of Mengersen et al. (2013) and Haddaway et al. (2020) for maintaining independence among heterogeneous samples, and our workflow ensured that the overall meta-analysis was conducted on independent replicate observations for each vital rate. (One way to implement this step in metafor is sketched below the programme table.) |
| | | | Maria Llambrich | **A new approach for meta-analysis using overall results: Amanida** The combination, analysis and evaluation of different studies that try to answer or solve the same scientific question, also known as meta-analysis, plays a crucial role in answering relevant clinical questions. Unfortunately, metabolomics studies rarely disclose all the statistical information needed to perform a meta-analysis in the traditional manner. Public meta-analysis tools can only be applied to data with standard deviations or directly to raw data, and currently no methodology is available for meta-analysis based on studies that only disclose overall results. Here we present Amanida, a meta-analysis approach using only the most commonly reported statistical parameters in this field: the p-value and the fold-change. The p-values are combined via Fisher's method and the fold-changes are combined by averaging, both weighted by the study size (n). The amanida package includes several visualization options: a volcano plot for quantitative results, a vote plot of total regulation behaviours (up/down-regulation) for each compound, and an explore plot of the vote-counting results with the number of times a compound is found up-regulated or down-regulated. In this way, it is very easy to detect discrepancies between studies at first glance. We have now developed a Shiny app to perform meta-analysis using the Amanida approach and make it more accessible to the community. (The p-value-combination core is sketched in base R below the programme table.) |
| | 13:00-14:00 | Quantitative synthesis with a Bayesian lens | Enzo Cerullo | **MetaBayesDTA: Codeless meta-analysis of test accuracy, with or without a gold standard** Background: Methods for the meta-analysis of test accuracy have historically required specialised statistical knowledge to implement. Recently, web applications have emerged which have made these methods more broadly accessible, such as MetaDTA (https://crsu.shinyapps.io/dta_ma/). This implements the so-called "bivariate" model and other features, such as a plot which displays quality-assessment results alongside the analytical results. Another application is BayesDTA (https://bayesdta.shinyapps.io/meta-analysis/), which does not assume a perfect gold standard. Methods: We sought to create an extended, Bayesian version of MetaDTA which (i) addresses several key limitations of existing apps, (ii) is accessible to researchers who do not have the background required to fit such models, and (iii) is suitable for other statisticians to use. We created the app in R using Shiny and other packages such as rstan. Results: The app addresses several key limitations of other apps. For the bivariate model, one can conduct subgroup analysis and univariate meta-regression. For the imperfect gold standard model, MetaBayesDTA can partially account for the fact that studies often use different reference tests by introducing a covariate for test type. Conclusions: Due to its user-friendliness and features, MetaBayesDTA should appeal to a broad variety of applied researchers and encourage wider use of more advanced methods which may improve the quality of test-accuracy meta-analyses. |
| | | | František Bartoš | **Adjusting for Publication Bias with Bayesian Model-Averaging and the RoBMA R Package** Publication bias presents a vital threat to meta-analysis and cumulative science. It can lead to overestimation of effect sizes and overstating of the evidence against the null hypothesis. To mitigate the impact of publication bias, multiple adjustment methods have been developed. However, their performance varies with the true data-generating process, and different methods often lead to conflicting conclusions. We developed a robust Bayesian meta-analysis (RoBMA) framework that uses model-averaging to combine different meta-analytic models based on their relative predictive performance. In other words, it allows researchers to weight inference by how well the different models predicted the data. We implemented the framework in the RoBMA R package. The package allows specification of various meta-analytic publication-bias adjustment models and of default and informed prior distributions, and provides summaries and visualizations for the combined ensemble. (A minimal RoBMA call is sketched below the programme table.) |
| | | | Christian Röver | **Using the bayesmeta R package for Bayesian random-effects meta-regression** The bayesmeta R package facilitates Bayesian meta-analysis within the simple normal-normal hierarchical model (NNHM). Using the same numerical approach, we extended the bayesmeta package to include several covariables instead of only a single "overall mean" parameter. We demonstrate the use of the package for several meta-regression applications, including modifications of the regressor matrix and prior settings to implement model variations. Possible applications include consideration of continuous covariables, comparison of study subgroups, and network meta-analysis. (A basic bayesmeta call is sketched below the programme table.) |
| | 15:30-17:30 | Workshop 5 | Arindam Basu | **Structural equation modelling** Meta-analysis of trials and observational studies can be conceptualised as mixed-effects modelling, where fixed-effects meta-analyses are special cases of random-effects meta-analyses. Structural equation modelling can be used to conduct meta-analyses in many ways that extend the scope of meta-analysis. In this workshop, we will show step by step how to use structural equation modelling for conducting meta-analyses in R with the metaSEM, lme4, and OpenMx packages. As an attendee, you will not need any previous experience with these packages, as we will work from start to finish with a set of preconfigured data, and you can later try the methods with your own datasets. In the workshop, the instructor will conduct live coding and attendees will follow along with questions and answers. All materials will be openly distributed in a GitHub repository and be available before and after the workshop. We will use a hosted RStudio instance, so please RSVP for this workshop so that accounts can be set up ahead of time. Register here |
| 24 Feb | 08:00-10:00 | Workshop 6 | Martin Westgate | **Introduction to writing R functions/packages** This workshop provides walkthroughs, examples and advice on how to go about building R functions and packages, and why you might wish to do so in the first place. It aims to discuss the benefits of using functions and packages to support your work and the work of others, and provides practical advice about when a package might be ready to 'go public'. Register here |
| | 08:00-09:30 | Workshop 7 | Marc Lajeunesse | **Wrangling large teams for research synthesis** Sometimes there is the opportunity to include hundreds of participants in your research synthesis project -- but how do you harness that energy into something consistent? This workshop will provide tips, tricks, and tools for managing large-team research synthesis projects. Topics covered will include management practices, consistency upkeep, open-access software, and open gaps for development. Register here |
| | 11:00-12:15 | Building an evidence ecosystem for tool design | Sarah Young, Trevor Riley (moderators) | [Presentations to be announced] [Summary to be announced] |
| | 12:30-13:30 | Developing the synthesis community | Wolfgang Viechtbauer | **The metadat Package: A Collection of Meta-Analysis Datasets for R** The metadat package is a data package for R that contains a large collection of meta-analysis datasets. Development of the package started at the 2019 Evidence Synthesis Hackathon at UNSW Canberra, with a first version of the package released on CRAN on 2021-08-20. As of right now, the package contains 70 datasets from published meta-analyses covering a wide variety of disciplines (e.g., education, psychology, sociology, criminology, social work, medicine, epidemiology, ecology). The datasets are useful for teaching purposes, illustrating and testing meta-analytic methods, and validating published analyses. Aside from providing detailed documentation of all included variables, each dataset is also tagged with one or multiple 'concept terms' that refer to various aspects of a dataset, such as the field/topic of research, the outcome measure used for the analysis, the model(s) used for analyzing the data, and the methods/concepts that can be illustrated with the dataset. The package also comes with detailed instructions and some helper functions for contributing additional datasets to the package. (A quick-start sketch appears below the programme table.) |
| | | | Marc Lajeunesse | **Lessons on leveraging large enrollment courses to screen studies for systematic reviews** Here I describe eight semesters of experimentation with various abstract-screening tools, including R, HTML, CANVAS, and Adobe, with the aims of (1) improving science literacy among undergraduate students, and (2) leveraging large enrollment courses to process and code vast amounts of bibliographic information for systematic reviews. I then discuss the promise of undergraduate participation in screening and classification, but emphasise (1) consistent failures of tools, in terms of student accessibility and the ability to combine and compare student screening decisions, and (2) my consistent inability to get consistent, high-quality screening outcomes from students. |
| | | | Alexandra Bannach-Brown | **‘LearnR’ & ‘shiny’ to support the teaching of meta-analysis of data from systematic reviews of animal studies** Teaching meta-analysis involves combining theoretical statistical knowledge and applying the theory in practice. Teaching sessions for non-technical students involving R are often beset with technical problems, such as outdated software versions, missing and conflicting dependencies, and a tendency for students to arrive on the session day without having installed the required software. This causes the first hour(s) of practical sessions to turn into technical troubleshooting sessions. To circumvent these problems, we have created a self-contained web app using the ‘shiny’ and ‘LearnR’ R packages to demonstrate the capabilities of R in meta-analysis. The app runs in a web browser, without the need for students to run R or install packages on their own devices, thus allowing instructors to focus on teaching rather than technical troubleshooting. Using a dataset and code from a previously published systematic review and meta-analysis of animal studies, students are walked through steps demonstrating the theoretical and mathematical foundations of meta-analysis and ultimately replicate the analysis and results. The app supports our live educational workshops but is also designed to be a stand-alone learning resource. At each step, there are multiple-choice questions for students to check their understanding of the material. We have demonstrated the use of existing R packages to generate a user interface for students to learn meta-analysis in practice. |
| | 14:00-14:45 | Conference close | Neal Haddaway | Hackathon summary presentations and closing remarks |
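
**Code sketches referenced in the programme above**

For Workshop 1 (Introduction to meta-analysis in R): a minimal illustration of the kind of workflow covered, using the presenter's metafor package and a dataset it ships with. This is a generic sketch, not the workshop materials.

```r
library(metafor)

# standardized mean differences from the Normand (1999) example data
dat <- escalc(measure = "SMD",
              m1i = m1i, sd1i = sd1i, n1i = n1i,   # group 1
              m2i = m2i, sd2i = sd2i, n2i = n2i,   # group 2
              data = dat.normand1999)

res <- rma(yi, vi, data = dat)   # random-effects model (REML by default)
summary(res)

rma(yi, vi, mods = ~ source, data = dat)   # meta-regression on a study characteristic

funnel(res)     # visual publication-bias check
trimfill(res)   # trim-and-fill adjustment
```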
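
For the transformer-models talk: the abstract does not name the exact tooling, but one common route from R is reticulate plus the Python 'transformers' library. A minimal sketch, assuming Python and transformers are installed; the model name is an arbitrary public example, and real screening would use a model fine-tuned on labelled title/abstract data.

```r
library(reticulate)
transformers <- import("transformers")   # the Python library, via reticulate

# off-the-shelf classification pipeline (placeholder model, not the talk's)
classifier <- transformers$pipeline(
  task  = "text-classification",
  model = "distilbert-base-uncased-finetuned-sst-2-english"
)

classifier("Effects of intervention X on outcome Y: a randomised controlled trial.")
```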
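
For citationchaser: a usage sketch based on the package README as I recall it. Argument names may differ between versions, and a Lens.org API token is required, so treat the signature as an assumption to check against the package documentation.

```r
library(citationchaser)

# backward chasing: references cited BY the starting article (example DOI)
refs <- get_refs(article_list = "10.1186/s13750-018-0121-7",
                 type = "doi",
                 get_records = "references",
                 token = "YOUR_LENS_API_TOKEN")   # personal Lens.org token

# forward chasing: articles that CITE the starting article
cits <- get_refs(article_list = "10.1186/s13750-018-0121-7",
                 type = "doi",
                 get_records = "citations",
                 token = "YOUR_LENS_API_TOKEN")
```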
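
For NMAstudio: the abstract notes that the analyses run in R via 'netmeta'. A minimal netmeta call on a dataset shipped with that package (not the psoriasis network from the talk):

```r
library(netmeta)
data(Senn2013)   # diabetes network bundled with netmeta

net <- netmeta(TE, seTE, treat1, treat2, studlab,
               data = Senn2013, sm = "MD")   # mean-difference scale
netgraph(net)   # network plot
forest(net)     # NMA forest plot
summary(net)
```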
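
For the multiverse primer: a generic sketch of the idea (not the speaker's code): enumerate analytic specifications, refit, and collect all summary effects. The data frame dat, with columns yi, vi, and a hypothetical rob (risk-of-bias) flag, is a placeholder.

```r
library(metafor)

# the analytic choices to cross: tau^2 estimator x exclusion rule
specs <- expand.grid(method       = c("REML", "DL", "PM"),
                     drop_highrob = c(TRUE, FALSE),
                     stringsAsFactors = FALSE)

multiverse <- do.call(rbind, lapply(seq_len(nrow(specs)), function(i) {
  d   <- if (specs$drop_highrob[i]) subset(dat, rob != "high") else dat
  fit <- rma(yi, vi, data = d, method = specs$method[i])
  cbind(specs[i, ], est = coef(fit), ci.lb = fit$ci.lb, ci.ub = fit$ci.ub)
}))

multiverse   # one row per specification: the full multiverse of results
```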
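
For wildmeta: a sketch following the package's documented interface as I understand it; the dataset and moderator are placeholders, and constrain_equal() comes from clubSandwich.

```r
library(metafor)
library(clubSandwich)
library(wildmeta)

# working model with dependent effect sizes nested within studies (hypothetical data)
mod <- rma.mv(yi, vi, mods = ~ 0 + study_type,   # three study-type means
              random = ~ 1 | study_id / es_id,
              data = dat)

# cluster wild bootstrap test that the three average effects are equal
Wald_test_cwb(full_model  = mod,
              constraints = constrain_equal(1:3),
              R = 999)   # number of bootstrap replicates
```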
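
For the sub-meta-analysis talk: metafor's aggregate() method for escalc objects is one way to collapse multiple within-study estimates into a single independent estimate per study before the overall analysis. This illustrates the idea, not necessarily the authors' exact workflow; the eider data frame (xi successes out of ni per estimate, plus a study identifier) is hypothetical.

```r
library(metafor)

dat <- escalc(measure = "PR", xi = xi, ni = ni, data = eider)   # raw proportions

# sub-meta-analysis step: one combined estimate per study
# (rho = assumed correlation among estimates from the same study)
agg <- aggregate(dat, cluster = study, rho = 0.5)

res <- rma(yi, vi, data = agg)   # overall analysis on independent estimates
summary(res)
```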
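
For Amanida: the statistical core in base R, i.e. Fisher's method for p-values plus an n-weighted average of log fold-changes. This is a generic illustration of the two ingredients named in the abstract, not the amanida package's API (the package additionally weights the p-value combination by study size).

```r
# reported results for one compound across three studies (made-up numbers)
p  <- c(0.03, 0.20, 0.01)   # p-values
fc <- c(1.8, 1.2, 2.1)      # fold-changes
n  <- c(25, 40, 18)         # study sizes

# Fisher's method: -2 * sum(log p) ~ chi-squared with 2k degrees of freedom
p_combined <- pchisq(-2 * sum(log(p)), df = 2 * length(p), lower.tail = FALSE)

# study-size-weighted (geometric) mean fold-change
fc_combined <- exp(weighted.mean(log(fc), w = n))

c(p = p_combined, fc = fc_combined)
```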
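
For RoBMA: the package's basic entry point, per its documentation; fitting is slow because an ensemble of models is estimated and averaged. The data frame dat is hypothetical.

```r
library(RoBMA)

# effect sizes (Cohen's d) and standard errors from a hypothetical meta-analysis
fit <- RoBMA(d = dat$d, se = dat$se, seed = 1)

summary(fit)   # model-averaged estimates plus inclusion Bayes factors for the
               # effect, heterogeneity, and publication-bias components
```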
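
For bayesmeta: the basic NNHM call, adapted from the package's own CrinsEtAl2014 example (metafor's escalc() computes the log odds ratios). The meta-regression extension described in the talk adds covariables on top of this interface.

```r
library(bayesmeta)
library(metafor)
data(CrinsEtAl2014)   # liver-transplant example shipped with bayesmeta

es <- escalc(measure = "OR",
             ai = exp.AR.events,  n1i = exp.total,
             ci = cont.AR.events, n2i = cont.total,
             data = CrinsEtAl2014)

fit <- bayesmeta(y = es$yi, sigma = sqrt(es$vi),
                 labels = CrinsEtAl2014$publication,
                 tau.prior = function(t) dhalfnormal(t, scale = 0.5))
summary(fit)
```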
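
For metadat: the package is data-only, so getting started is just loading it and opening a dataset's help page; pairing it with metafor reproduces published analyses. For example, with the classic BCG-vaccine trials data:

```r
library(metadat)
library(metafor)

head(dat.bcg)   # the BCG vaccine trials (Colditz et al., 1994)
# ?dat.bcg      # variable documentation and 'concept terms' tags

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
rma(yi, vi, data = dat)   # random-effects model of the log relative risks
```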