4  Reproducible workflows

Required material

Key concepts and skills

4.1 Introduction

Suppose you have cancer and you have to choose between a black box AI surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the AI surgeon to be illegal?

Geoffrey Hinton, 20 February 2020.

The number one thing to keep in mind about machine learning is that performance is evaluated on samples from one dataset, but the model is used in production on samples that may not necessarily follow the same characteristics… So when asking the question, “would you rather use a model that was evaluated as 90% accurate, or a human that was evaluated as 80% accurate”, the answer depends on whether your data is typical per the evaluation process. Humans are adaptable, models are not. If significant uncertainty is involved, go with the human. They may have inferior pattern recognition capabilities (versus models trained on enormous amounts of data), but they understand what they do, they can reason about it, and they can improvise when faced with novelty.

François Chollet, 20 February 2020.

If science is about systematically building and organizing knowledge in terms of testable explanations and predictions, then data science takes this and focuses on data. This means that building, organizing, and sharing knowledge is a critical aspect. Creating knowledge, once, in a way that only you can do it, does not meet this standard. Hence, there is a need for reproducible data science workflows.

Alexander (2019) talks about how reproducible research means it can be exactly redone, given all the materials used. This underscores the importance of providing the code, data, and environment. The minimum expectation is that another person is independently able to use your code, data, and environment, to get your results, including figures and tables. Ironically, there are different definitions of reproducibility between disciplines. Barba (2018) surveys a variety of disciplines and concludes that the predominant language usage implies the following definitions: Reproducible research is when “[a]uthors provide all the necessary data and the computer codes to run the analysis again, re-creating the results.” A replication is a study “that arrives at the same scientific findings as another study, collecting new data (possibly with different methods) and completing new analyses.”

Regardless of what it is specifically called, Gelman (2016) identifies how large an issue this is in various social sciences. The problem with work that is not reproducible, is that it does not contribute to our stock of knowledge about the world. Since Gelman (2016), a great deal of work has been done in many social sciences and the situation has improved a little, but much work remains. And the situation is similar in the life sciences (Heil et al. 2021) and computer science (Pineau et al. 2021).

Some of the examples that Gelman (2016) talks about are not that important in the scheme of things. But at the same time, we saw, and continue to see, similar approaches being used in areas with big impacts. For instance, many governments have created “nudge” units that implement public policy (Sunstein and Reisch 2017) even though there is compelling evidence that some of the claims lack credibility (Maier et al. 2022). Governments are increasingly using algorithms that they do not make open (Chouldechova et al. 2018). And Herndon, Ash, and Pollin (2014) document how a paper in economics that was used by governments to justify austerity policies following the Global Financial Crisis turned out to not be reproducible.

At a minimum, and with few exceptions, we must release our code, datasets, and environment. Without these, it is difficult to know what a finding speaks to (Miyakawa 2020). More banally, we also do not know if there are mistakes or aspects that were inadvertently overlooked (Merali 2010; Hillel 2017; Silver 2020). Increasingly, we consider a paper to be an advertisement, and for the associated code, data, and environment to be the actual work (Buckheit and Donoho 1995). Steve Jobs, a co-founder of Apple, talked about how the best craftsmen ensure that even the aspects of their work that no one else will ever see are as well-finished and high-quality as the aspects that are public facing (Isaacson 2011). The same is true in data science, where often one of the distinguishing aspects of high-quality work is that the README and code comments are as polished as, say, the abstract of the associated paper.

Workflows exist within a cultural and social context, which imposes an additional reason for the need for them to be reproducible. For instance, Wang and Kosinski (2018) use deep neural networks to train a model to distinguish between gay and heterosexual men. (Murphy (2017) provides a summary of the paper, the associated issues, and comments from its authors). To do this, Wang and Kosinski (2018, 248) needed a dataset of photos of folks that were “adult, Caucasian, fully visible, and of a gender that matched the one reported on the user’s profile”. They verified this using Amazon Mechanical Turk, an online platform that pays workers a small amount of money to complete specific tasks. The instructions provided to the Mechanical Turk workers for this task specify that Barack Obama, the 44th US President, who had a white mother and a black father, should be classified as “Black”; and that Latino is an ethnicity, rather than a race (Mattson 2017). The classification task may seem objective, but, perhaps unthinkingly, echoes the views of Americans with a certain class and background.

This is just one specific concern about one part of the Wang and Kosinski (2018) workflow. Broader concerns are raised by others including Gelman, Mattson, and Simpson (2018). The main issue is that statistical models are specific to the data on which they were trained. And the only reason that we can identify likely issues in the model of Wang and Kosinski (2018) is because, despite not releasing the specific dataset that they used, they were nonetheless open about their procedure. For our work to be credible, it needs to be reproducible by others.

Some of the steps that we can take to make our work more reproducible include:

  1. Ensure the entire workflow is documented. This may involve addressing questions such as:
    • How was the raw dataset obtained and is access likely to be persistent and available to others?
    • What specific steps are being taken to transform the raw data into the data that were analyzed, and how can this be made available to others?
    • What analysis has been done, and how clearly can this be shared?
    • How has the final paper or report been built and to what extent can others follow that process themselves?
  2. Not worrying about perfect reproducibility initially, but instead focusing on trying to improve with each successive project. For instance, each of the following requirements is more onerous than the one before, and there is no need to be concerned about not being able to do the last until we can do the first:
    • Can you run your entire workflow again?
    • Can another person run your entire workflow again?
    • Can “future-you” run your entire workflow again?
    • Can “future-another-person” run your entire workflow again?
  3. Including a detailed discussion about the limitations of the dataset and the approach in the final paper or report.

The workflow that we advocate is: Plan -> Simulate -> Acquire -> Explore -> Share. But it can be alternatively considered as: “Think an awful lot, mostly read and write, sometimes code”.

There are various tools that we can use at the different stages that will improve the reproducibility of this workflow. This includes Quarto, R Projects, and Git and GitHub.

4.2 Quarto

4.2.1 Getting started

Quarto integrates code and natural language in a way that is called “literate programming” (Knuth 1984). It is the successor to R Markdown, which was a variant of Markdown specifically designed to allow R code chunks to be included. Quarto uses a mark-up language similar to HyperText Markup Language (HTML) or LaTeX, in contrast to a “What You See Is What You Get” (WYSIWYG) editor, such as Microsoft Word. This means that all the aspects are consistent, for instance, all top-level headings will look the same. But it means that we use symbols to designate how we would like certain aspects to appear. And it is only when we build the mark-up that we get to see what it looks like. A visual editor option can also be used, which hides the need for the user to do this mark-up themselves.

While it makes sense to use Quarto going forward, there are still a lot of resources written for and in R Markdown. For this reason we provide the R Markdown equivalents for this section in Appendix B.

Shoulders of giants

Fernando Pérez is an associate professor, in statistics, at the University of California, Berkeley and a Faculty Scientist, Data Science and Technology Division, at Lawrence Berkeley National Laboratory. He took a PhD in particle physics from the University of Colorado, Boulder. During his PhD he created IPython, which enables Python to be used interactively, and now underpins Project Jupyter, which inspired programs such as R Markdown and is an alternative to Quarto. In 2017 he was awarded the ACM Software System Award.

One advantage of literate programming is that we get a “live” document in which code executes and then forms part of the document. Another advantage of Quarto is that very similar code can compile into a variety of documents, including HTML pages and PDFs. Quarto also has default options set up for including title, author, and date sections. One disadvantage is that it can take a while for a document to compile because all the code needs to run. Tierney (2022) provides an especially useful and detailed Quarto usage guide.

We need to download Quarto from here. Or if using RStudio Cloud then it is already built in. We can then create a new Quarto document within RStudio (“File” -> “New File” -> “Quarto Document…”).

4.2.2 Essential commands

Like R Markdown, Quarto uses a variation of Markdown as its underlying syntax. Essential markdown commands include those for emphasis, headers, lists, links, and images. A reminder of these is included in RStudio (“Help” -> “Markdown Quick Reference”). It is your choice as to whether you want to use the visual or source editor. But either way, it is good to understand these essentials because it will not always be possible to use a visual editor, for instance if you are quickly looking at a Quarto document in GitHub. Also, RStudio is sometimes laggy, and it can be useful to use a text editor such as Sublime Text or VS Code.

  • Emphasis: *italic*, **bold**
  • Headers (these go on their own line with a blank line before and after):
         # First level header
         
         ## Second level header
         
         ### Third level header
  • Unordered list, with sub-lists:
    * Item 1
    * Item 2
        + Item 2a
        + Item 2b
  • Ordered list, with sub-lists:
    1. Item 1
    2. Item 2
    3. Item 3
        + Item 3a
        + Item 3b
  • URLs can be added: [this book](https://www.tellingstorieswithdata.com) results in this book.
  • A paragraph is created by leaving a blank line.
A paragraph about some idea, nicely spaced from the following paragraph.

Another paragraph about another idea, nicely spaced from the earlier paragraph.

Once we have added some aspects, then we may want to see the actual document. To build the document click “Render”.

4.2.3 R chunks

We can include code for R and many other languages in code chunks within a Quarto document. Then when we render the document, the code will run and be included in the document.

To create an R chunk, we start with three backticks and then within curly braces we tell Quarto that this is an R chunk. Anything inside this chunk will be considered R code and run as such. For instance, we could load the tidyverse and AER and make a graph of the number of times a survey respondent visited the doctor in the past two weeks.

```{r}
library(tidyverse)
library(AER)

data("DoctorVisits", package = "AER")

DoctorVisits |>
  ggplot(aes(x = illness)) +
  geom_histogram(stat = "count")
```

The output of that code is Figure 4.1.

Figure 4.1: Number of illnesses in the past two weeks, based on the 1977–1978 Australian Health Survey

There are various evaluation options that are available in chunks. We include these, each on a new line, by opening the line with the chunk-specific comment delimiter “#|” and then the option. Helpful options include:

  • echo: This controls whether the code itself is included in the document. For instance, echo: false would mean the code will be run and its output will show, but the code itself would not be included in the document.
  • include: This controls whether the output of the code is included in the document. For instance, include: false would run the code, but would not result in any output, and the code itself would not be included in the document.
  • eval: This controls whether the code is run (evaluated). For instance, eval: false would mean that the code is not run, and hence there would not be any output to include, but the code itself would be included in the document.
  • warning: This controls whether warnings should be included in the document. For instance, warning: false would mean that warnings are not included.
  • message: This controls whether messages should be included in the document. For instance, message: false would mean that messages are not included in the document.

For instance, we could include the output, but not the code, and suppress any warnings.

```{r}
#| echo: false
#| warning: false

library(tidyverse)
library(AER)

data("DoctorVisits", package = "AER")

DoctorVisits %>%
  ggplot(aes(x = visits)) +
  geom_histogram(stat = "count")
```

It is important to leave a blank line on either side of an R chunk, otherwise it may not run properly. It is also important that lower case is used for logical values, i.e. “false” not “FALSE”.

Most people did not visit a doctor in the past two weeks.


There were some people that visited a doctor once, and then very few people that visited two or more times.

It is also important that the Quarto document itself loads any datasets that are needed. It is not enough that they are in the environment. This is because the Quarto document runs the code it contains when it is built, and does not rely on objects that happen to be in your current environment.

4.2.4 Top matter

Top matter consists of defining aspects such as the title, author, and date. It is contained within three dashes at the top of a Quarto document. For instance, the following would specify a title, date that automatically updated to the date the document was rendered, and an author.

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
format: html
---

An abstract is a short summary of the paper, and we could add that to the top matter as well.

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
abstract: "This is my abstract."
format: html
---

By default, Quarto will create an HTML document, but we can change the output format to produce a PDF. This uses LaTeX in the background and may require the installation of supporting packages. In particular it is common to need to first install tinytex (Xie 2019).

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
abstract: "This is my abstract."
format: pdf
---

We can include references by specifying a BibTeX file in the top matter and then calling it within the text, as needed.

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
format: pdf
abstract: "This is my abstract."
bibliography: bibliography.bib
---

We would need to make a separate file called “bibliography.bib” and save it next to the Quarto file. In the BibTeX file we need an entry for the item that is to be referenced. For instance, the citation for R can be obtained with citation() and this can be added to the “bibliography.bib” file. Similarly, the citation for a package can be found by including the package name, for instance citation("tidyverse"). It can be helpful to use Google Scholar, or doi2bib, to get citations for books or articles.

@Manual{,
    title = {R: A Language and Environment for Statistical Computing},
    author = {{R Core Team}},
    organization = {R Foundation for Statistical Computing},
    address = {Vienna, Austria},
    year = {2021},
    url = {https://www.R-project.org/},
  }
@Article{,
    title = {Welcome to the {tidyverse}},
    author = {Hadley Wickham and Mara Averick and Jennifer Bryan and Winston Chang and Lucy D'Agostino McGowan and Romain François and Garrett Grolemund and Alex Hayes and Lionel Henry and Jim Hester and Max Kuhn and Thomas Lin Pedersen and Evan Miller and Stephan Milton Bache and Kirill Müller and Jeroen Ooms and David Robinson and Dana Paige Seidel and Vitalie Spinu and Kohske Takahashi and Davis Vaughan and Claus Wilke and Kara Woo and Hiroaki Yutani},
    year = {2019},
    journal = {Journal of Open Source Software},
    volume = {4},
    number = {43},
    pages = {1686},
    doi = {10.21105/joss.01686},
  }

We need to create a unique key that we use to refer to this item in the text. This can be anything, provided it is unique, but meaningful ones can be easier to remember, for instance “citeR”.

@Manual{citeR,
    title = {R: A Language and Environment for Statistical Computing},
    author = {{R Core Team}},
    organization = {R Foundation for Statistical Computing},
    address = {Vienna, Austria},
    year = {2021},
    url = {https://www.R-project.org/},
  }
@book{tellingstories,
    title = {Telling Stories with Data},
    author = {Rohan Alexander},
    year = {2022},
    publisher = {CRC Press},
    url = {https://tellingstorieswithdata.com}
  }

To cite R in the Quarto document we include @citeR, which would put the brackets around the year, like this: R Core Team (2022), or [@citeR], which would put the brackets around the whole thing, like this: (R Core Team 2022).

The reference list at the end of the paper is automatically built from the BibTeX file and the references cited in the paper. At the end of the Quarto document, include a heading “# References”, and the citations will be added after that heading. When the Quarto file is rendered, Quarto sees the citations in the content, goes to the BibTeX file to get the reference details that it needs, builds the reference list, and then adds it to the end of the rendered document.
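For instance, the end of a Quarto document might look like the following (a minimal sketch, assuming the “citeR” and “tellingstories” entries from above are in “bibliography.bib”).

We analyze the data using R [@citeR], following the workflow of @tellingstories.

# References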

4.2.5 Cross-references

It can be useful to cross-reference figures, tables, and equations. This makes it easier to refer to them in the text. To do this for a figure we refer to the name of the R chunk that creates or contains the figure. For instance, consider the following code.

```{r}
#| label: fig-uniquename
#| fig-cap: Number of illnesses in the past two weeks, based on the 1977--1978 Australian Health Survey
#| echo: true
#| warning: false

library(tidyverse)
library(AER)

data("DoctorVisits", package = "AER")

DoctorVisits |>
  ggplot(aes(x = illness)) +
  geom_histogram(stat = "count")
```

Then (@fig-uniquename) would produce: (Figure 4.2), because the label of the R chunk is fig-uniquename. We need to add “fig-” to the start of the chunk label so that Quarto knows that this is a figure. We then include a “fig-cap:” option in the R chunk that specifies the caption.

Figure 4.2: Number of illnesses in the past two weeks, based on the 1977–1978 Australian Health Survey

We can add #| layout-ncol: 2 in an R chunk within a Quarto document to have two graphs appear side-by-side (Figure 4.3). Here Figure 4.3 (a) uses the minimal theme, and Figure 4.3 (b) uses the classic theme. These both cross-reference the same label #| label: fig-doctorgraphsidebyside in the R chunk, with an additional option added in the R chunk of #| fig-subcap: ["Number of illnesses","Number of visits to the doctor"] which provides the sub-captions. The addition of letters in-text is accomplished by adding “-1” and “-2” to the end of the label when it is used in-text: (@fig-doctorgraphsidebyside), @fig-doctorgraphsidebyside-1, and @fig-doctorgraphsidebyside-2.

```{r}
#| label: fig-doctorgraphsidebyside
#| fig-subcap: ["Number of illnesses", "Number of visits to the doctor"]
#| layout-ncol: 2

DoctorVisits |>
  ggplot(aes(x = illness)) +
  geom_histogram(stat = "count") +
  theme_minimal()

DoctorVisits |>
  ggplot(aes(x = visits)) +
  geom_histogram(stat = "count") +
  theme_classic()
```

(a) Number of illnesses

(b) Number of visits to the doctor

Figure 4.3: Two variants of Doctor Visits

We can take a similar approach to cross-reference tables. For instance, (@tbl-docvisittable) will produce: (Table 4.1). In this case we specify “tbl-” at the start of the label so that Quarto knows that it is a table. And we specify a caption for the table with “tbl-cap:”.

```{r}
#| label: tbl-docvisittable
#| echo: true
#| tbl-cap: "Number of visits to the doctor in the past two weeks, based on the 1977--1978 Australian Health Survey"

DoctorVisits |> 
  count(visits) |> 
  knitr::kable()
```
Table 4.1: Number of visits to the doctor in the past two weeks, based on the 1977–1978 Australian Health Survey
visits      n
     0   4141
     1    782
     2    174
     3     30
     4     24
     5      9
     6     12
     7     12
     8      5
     9      1

Finally, we can also cross-reference equations. To do that we need to add a tag {#eq-macroidentity} which we then reference.

$$
Y = C + I + G + (X - M)
$$ {#eq-macroidentity}

For instance, we then use @eq-macroidentity to produce Equation 4.1.

\[ Y = C + I + G + (X - M) \tag{4.1}\]

When using cross-references, it is important that the labels are relatively simple. In general, try to keep the names simple but unique, avoid punctuation and stick to letters and hyphens. Do not use underscores, because they can cause an error.

4.3 R projects and file structure

Projects are widely used in software development and exist to keep all the files (data, analysis, report, etc.) associated with a particular project together and related to each other. An R project can be created in RStudio: “File” -> “New Project”, then select “Empty project”, name the project, and decide where to save it. For instance, a project focused on maternal mortality may be called “maternalmortality”, and might be saved within a folder of other projects. The use of R projects enables “reliable, polite behavior across different computers or users and over time” (Bryan and Hester 2020). This is because it removes the context of that folder from its broader existence. So files exist in relation to the base of the R project, not the base of the computer.

Once a project has been created, a new file with the extension “.Rproj” will appear in that folder. An example of a folder with an R Project, an example Quarto document, and an appropriate file structure is available here. It can be downloaded: “Code” -> “Download ZIP”.

The main advantage of using an R Project is that we are more easily able to reference other files in a self-contained way. That means when others want to reproduce our work, they know that all the file references and structure should not need to be changed. It means that files are referenced in relation to where the “.Rproj” file is. For instance, instead of reading a csv from, say, "~/Documents/projects/book/data/" you can read it in from book/data/. It may be that someone else does not have a “projects” folder, and so the former would not work for them, while the latter would.

The use of R projects is required to meet the minimal level of reproducibility. The use of functions such as setwd(), and computer-specific file paths, bind work to a specific computer in a way that is not appropriate. Trisovic et al. (2022) describe the use of absolute paths, rather than relative paths, as a common error that they had to correct in their large-scale study of R code, uploaded to the Harvard Dataverse, that underpins research papers.
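As a small illustration (the file name and folders are hypothetical), within an R Project we would read data using a path that is relative to the “.Rproj” file, rather than an absolute path that only exists on one computer.

# Read the raw data using a path relative to the ".Rproj" file
# ("inputs/data/raw_data.csv" is a hypothetical file in this project)
raw_data <- readr::read_csv("inputs/data/raw_data.csv")

# Rather than an absolute, computer-specific path such as:
# raw_data <- readr::read_csv("~/Documents/projects/book/inputs/data/raw_data.csv")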

There are a variety of ways to set up a folder. A variant of Wilson et al. (2017) that is often useful is shown in the example file structure, and sketched below. Here we have an “inputs” folder that contains raw data (which should never be modified (Wilson et al. 2017)) and literature related to the project (which cannot be modified). An “outputs” folder contains data that we create using R, as well as the paper that we are writing. And a “scripts” folder contains the scripts that modify the raw data and save the results into “outputs”. We will do most of our work in “scripts”, and write the Quarto file for the paper in “outputs”. Other useful aspects include a “README.md”, which specifies overview details about the project, and a LICENSE. An example of what to put in the README is here. Another helpful variant of this project skeleton is provided by Mineault and The Good Research Code Handbook Community (2021).
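A sketch of one possible layout for the maternal mortality example is as follows (the file names are illustrative only).

maternalmortality/
  maternalmortality.Rproj
  README.md
  LICENSE
  inputs/
    data/
    literature/
  outputs/
    data/
    paper/
  scripts/
    00-simulate_data.R
    01-download_data.R
    02-clean_data.R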

4.4 Version control

We implement version control through a combination of Git and GitHub. There are a variety of reasons for this including:

  1. Enhancing the reproducibility of work by making it easier to share code and data;
  2. Making it easier to share work;
  3. Improving workflow by encouraging systematic approaches; and
  4. Making it easier to work in teams.

Git is a version control system with a fascinating history (Brown 2018). The way one often starts doing version control is to have various versions of the one file: “first_go.R”, “first_go-fixed.R”, “first_go-fixed-with-mons-edits.R”. But this soon becomes cumbersome. One often soon turns to dates, for instance: “2022-01-01-analysis.R”, “2022-01-02-analysis.R”, “2022-01-03-analysis.R”, etc. While this keeps a record, it can be difficult to search when we need to go back, because it can be difficult to remember the date some change was made. In any case, it quickly gets unwieldy for a project that is being regularly worked on.

Instead of this, we use Git so that we can have one version of the file, say, “analysis.R”, and then use Git to keep a record of the changes to that file, and a snapshot of that file at a given point in time. We determine when Git takes that snapshot. We additionally include a message saying what changed between this snapshot and the last. In that way, there is only ever one version of the file, but the history can be more easily searched.

One complication is that Git was designed for teams of software developers. As such, while it works, it can be a little ungainly for non-developers. But in general it is the case that Git has been usefully adapted for data science, even when the only collaborator one may have is one’s future self (Bryan 2018a).

GitHub, GitLab, and various other companies offer easier-to-use services that build on Git. While there are tradeoffs, we introduce GitHub here because it is the predominant platform (Eghbal 2020, 21). Git and GitHub are built into RStudio Cloud, which provides a nice option if you have issues with local installation. One of the initial challenging aspects of Git is the terminology. Folders are called “repos”. Creating a snapshot is called a “commit”. One gets used to it eventually, but feeling confused initially is normal. Bryan (2020) is especially useful for setting up and using Git and GitHub.

4.4.1 Git

We first need to check whether Git is installed. Open RStudio, go to the Terminal, type the following, and then enter/return.

git --version

If you get a version number, then you are done (Figure 4.4).

Figure 4.4: Using Terminal to check whether Git is installed in RStudio

Git is pre-installed in RStudio Cloud, it should be pre-installed on Mac, and it may be pre-installed on Windows. If you do not get a version number in response, then you need to install it. To do that you should follow the instructions specific to your operating system in Bryan (2020, chap. 5).

Given Git is installed we need to tell it our username and email. We need to do this because Git adds this information whenever we take a “snapshot”, or to use Git’s language, whenever we make a commit.

Again, within the Terminal, type the following, replacing the details with yours, and then enter/return after each line.

git config --global user.name "Rohan Alexander"
git config --global user.email "rohan.alexander@utoronto.ca"
git config --global --list

When this set-up has been done properly, the values that you entered for “user.name” and “user.email” will be returned after the last line (Figure 4.5).

Figure 4.5: Adding a username and email address to Git in RStudio

These details–username and email address–will be public. There are various ways to hide the email address if necessary, and GitHub provides instructions about this. Bryan (2020, chap. 7) provides more detailed instructions about this step, and a trouble-shooting guide.

4.4.2 GitHub

Now that Git is set-up, we need to set-up GitHub. We created an account in Chapter 2, which we use again here. After being signed in we first need to make a new folder, which is called a “repo” in Git. Look for a “+” in the top right, and then select “New Repository” (Figure 4.6).

Figure 4.6: Start process of creating a new repository

At this point we can add a sensible name for the repo. Leave it as “public” for now, because it can always be deleted later. And check the box to “Initialize this repository with a README”. Change “Add .gitignore” to R. After that, click “Create repository” (Figure 4.7).

Figure 4.7: Creating a new repository in GitHub

This will take us to a screen that is fairly empty, but the details that we need are in the green “Clone or Download” button, which we can copy by clicking the clipboard (Figure 4.8).

Figure 4.8: Copy the URL of the new repository

Now returning to RStudio, in RStudio Cloud, we create a “New Project” using “New Project from Git Repository”. It will ask for the URL that we just copied (Figure 4.9). If you are using a local computer, then this same step is accomplished through the menu: “File” -> “New Project…” -> “Version Control” -> “Git”, then paste in the URL, give the folder a meaningful name, check “Open in new session”, then “Create Project”.

Figure 4.9: Adding the project to RStudio Cloud

At this point, a new folder has been created locally that we can use. We will want to be able to push it back to GitHub, and for that we will need to use a Personal Access Token (PAT) to link our RStudio Workspace with our GitHub account. We use usethis (Wickham and Bryan 2020) and gitcreds (Csárdi 2020) to enable this. These packages are, respectively, a package that automates repetitive tasks, and a package that authenticates with GitHub. To create a PAT, while signed into GitHub in the browser, run usethis::create_github_token() in your R session. GitHub will open in the browser with various options filled out (Figure 4.10). It can be useful to give the PAT an informative name by replacing “Note”, for instance “PAT for RStudio”, then “Generate token”.

Figure 4.10: Creating a PAT

We only have one shot to copy this token, and if we make a mistake then we will need to generate a new one. Do not include the PAT in any R script or Quarto document. Instead run gitcreds::gitcreds_set(), which will then prompt you to add your PAT in the console.

To use GitHub for a project that we are actively working on we follow a procedure:

  1. The first thing to do is almost always to get any changes with “pull”. To do this, open the Git pane in RStudio, and click the blue down arrow. This gets any changes to the folder, as it is on GitHub, into our own version of the folder.
  2. We can then make our changes to our copy of the folder. For instance, we could update the README, and then save it as normal.
  3. Once this is done, we need to “add”, “commit”, and “push”. In the Git pane in RStudio, select the files to be added. This adds them to the staging area. Then click “Commit” (Figure 4.11). A new window will open. Add a commit message which is informative about the change that was made, and then click “Commit” in that new window (Figure 4.12). Finally, click “Push” to send the changes to GitHub.

Figure 4.11: Adding files to be committed

Figure 4.12: Making a commit
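The same pull, then add, commit, and push cycle can also be run from the Terminal rather than the Git pane. A minimal sketch (the file name and commit message are illustrative) is:

git pull
git add README.md
git commit -m "Update README with an overview of the project"
git push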

There are a few common pain-points when it comes to Git and GitHub. We recommend committing and pushing regularly, especially when you are new to version control. This increases the number of snapshots that you could come back to if needed. All commits should have an informative commit message. If you are new to version control, then the expectation of a good commit message is that it contains a short summary of the change, followed by a blank line, and then an explanation of the change including what the change is, and why it is being made. For instance, if your commit adds graphs to a paper, then a commit message could be:

Add graphs

Graphs of unemployment and inflation added into Data section.

There is some evidence of a relationship between overall quality and commit behavior (Sprint and Conci 2019). In an ideal scenario the commit messages act as a kind of journal of the project.

Git and GitHub were designed for software developers, rather than data scientists. GitHub limits the size of the files it will consider to 100MB, and even 50MB will prompt a warning. Data science projects regularly involve datasets that are larger than this. In Chapter 12 we discuss the use of data deposits, which can be especially useful when a project is completed, but when we are actively working on a project it can be useful to ignore the file, at least as far as Git and GitHub are concerned. We do this using a “.gitignore” file, in which we list all of the files that we do not want to track using Git. The starter folder contains an example “.gitignore” file. And it can be helpful to run usethis::git_vaccinate(), which will add a variety of files to a global “.gitignore” file in case you forget to do it on a project-basis. In particular Mac users will find it useful that this will cause “.DS_Store” files to be ignored.
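For instance, a “.gitignore” file that keeps a large raw data file, and common R and macOS clutter, out of version control might contain lines like the following (the data file name is hypothetical).

# Large raw data file that exceeds GitHub's size limit
inputs/data/large_raw_data.csv

# R and RStudio clutter
.Rhistory
.RData
.Rproj.user/

# macOS clutter
.DS_Store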

We used the Git pane in RStudio which removed the need to use the Terminal, but it did not remove the need to go to GitHub and set-up a new project. Having set-up Git and GitHub, we can further improve this aspect of our workflow with usethis (Wickham and Bryan 2020).

First check that Git is set-up with usethis::git_sitrep(). This should print information about the username and email. We can use usethis::use_git_config() to update these details if needed.

usethis::use_git_config(
  user.name = "Rohan Alexander",
  user.email = "rohan.alexander@utoronto.ca"
)

Rather than starting a new project in GitHub, and then adding it locally, we can now use usethis::use_git() to initiate it and commit the files. Having committed, we can use usethis::use_github() to push to GitHub, which will create the folder on GitHub as well.
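A sketch of that sequence, run from the Console inside a new R Project, is:

# Initialize Git in the current R Project and make a first commit
usethis::use_git()

# Create a matching repository on GitHub and push to it
usethis::use_github()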

4.5 Using R in practice

4.5.1 Dealing with errors

When you are programming, eventually your code will break, when I say eventually, I mean like probably 10 or 20 times a day.

Gelfand (2021)

Everyone who uses R, or any programming language for that matter, runs into trouble at some point. This is normal. Programming is hard. At some point code will not run or will throw an error. This happens to everyone. It is common to get frustrated, but to move forward we develop strategies to work through the issues:

  1. If you are getting an error message, then sometimes it will be useful. Try to read it carefully to see if there is anything of use in it.
  2. Try to search, say on Google, for the error message. It can be useful to include “tidyverse” or “in R” in the search to help make the results more appropriate. Sometimes Stack Overflow results can be useful.
  3. Look at the help file for the function, by putting “?” before the function, for instance, ?pivot_wider(). A common issue is to use a slightly incorrect argument name or format, such as accidentally including a string instead of an object name.
  4. Look at where the error is happening and remove or comment out code until the error is resolved, and then slowly add code back again.
  5. Check the class of the object, with class(), for instance, class(data_set$data_column). Ensure that it is what is expected.
  6. Restart R (“Session” -> “Restart R and Clear Output”) and load everything again.
  7. Restart the computer.
  8. Search for what you are trying to do, rather than the error, being sure to include “tidyverse” or “in R” in the search to help make the results more appropriate. For instance, “save PDF of graph in R made using ggplot”. Sometimes there are relevant blog posts or Stack Overflow answers that will help.
  9. Make a small, self-contained, reproducible example, a “reprex”, to see if the issue can be isolated and to enable others to help.

More generally, while this is rarely possible to do, it is almost always helpful to take a break and come back the next day.

4.5.2 Reproducible examples

No one can advise or help you—no one. There is only one thing you should do. Go into yourself.

Rilke (1929)

Asking for help is a skill like any other. We get better at it with practice. It is important to try not to say “this doesn’t work”, “I tried everything”, “your code does not work”, or “here is the error message, what do I do?”. In general, it is not possible to help based on these comments, because there are too many possible issues. You need to make it easy for others to help you. This involves a few steps.

  1. Provide a small, self-contained, example of your data, and code, and detail what is going wrong.
  2. Document what you have tried so far, including which Stack Overflow and RStudio Community pages you have looked at, and why they are not quite what you are after.
  3. Be clear about the outcome that you would like.

Begin by creating a minimal REPRoducible EXample, a “reprex”. This is code that contains what is needed to reproduce the error, but only what is needed. This means that the code is likely a smaller, simpler version that nonetheless reproduces the error.

Sometimes this process enables one to solve the problem. If it does not, then it gives someone else a fighting chance of being able to help. There is almost no chance that you have got a problem that someone has not addressed before. It is more likely that the main difficulty is in trying to communicate what you are trying to do and what is happening, in a way that allows others to recognize both. Developing tenacity is important.

To develop reproducible examples, reprex (Bryan et al. 2019) is especially useful. To use it we:

  1. Load the reprex package: library(reprex).
  2. Highlight, and copy, the code that is giving issues.
  3. Run reprex() in the Console.

If the code is self-contained, then it will preview in the Viewer. If it is not, then it will error, and the code needs to be re-written so that it is self-contained.

If you need data to reproduce the error, then you should use data that is built into R. There are a large number of datasets that are built into R and can be seen using library(help = "datasets"). But if possible, you should use a common option such as “mtcars” or “faithful”. Combining a reprex with a GitHub Gist that was introduced in Chapter 2, increases the chances that someone is able to help you.
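For instance, a minimal sketch of the workflow, using the built-in “mtcars” dataset so that no external data are needed, and passing the code directly to reprex() rather than copying it first, might look like this.

library(reprex)

# Render a small, self-contained example in a clean R session;
# the nicely formatted result is placed on the clipboard, ready to share
reprex({
  library(tidyverse)
  mtcars |>
    count(cyl)
})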

4.5.3 Mentality

(Y)ou are a real, valid, competent user and programmer no matter what IDE you develop in or what tools you use to make your work work for you

(L)et’s break down the gates, there’s enough room for everyone

Sharla Gelfand, 10 March 2020.

If you write code, then you are a programmer, regardless of how you do it, what you are using it for, or who you are. But there are a few traits that one tends to notice great programmers have in common.

  • Focused: Often having an aim to “learn R” or something similar tends to be problematic, because there is no real end point to that. It tends to be more efficient to have smaller, more specific goals, such as “make a histogram about the 2022 Australian Election with ggplot”. This is something that can be focused on and achieved in a few hours. The issue with goals that are more nebulous, such as “I want to learn R”, is that it becomes easy to get lost on tangents and much more difficult to get help. This can be demoralizing and lead to folks quitting too early.
  • Curious: It is almost always useful to have a go. In general, the worst that happens is that you waste your time. You can rarely break something irreparably with code. If you want to know what happens if you pass a “vector” instead of a “dataframe” to ggplot() then try it.
  • Pragmatic: At the same time, it can be useful to stick within reasonable bounds, and make one small change each time. For instance, say you want to run some regressions, and are curious about the possibility of using the tidymodels package (Kuhn and Wickham 2020) instead of lm(). A pragmatic way to proceed is to use one aspect from the tidymodels package initially and then make another change next time.
  • Tenacious: Again, this is a balancing act. There are always unexpected problems and issues with every project. On the one hand, persevering despite these is a good tendency. But on the other hand, sometimes one does need to be prepared to give up on something if it does not seem like a break-through is possible. Here mentors can be useful as they tend to be a better judge of what is reasonable. It is also where appropriate planning is useful.
  • Planned: It is almost always useful to excessively plan what you are going to do. For instance, you may want to make a histogram of the 2019 Canadian Election. You should plan the steps that are needed and even to sketch out how each step might be implemented. For instance, the first step is to get the data. What packages might be useful? Where might the data be? What is the back-up plan if the data do not exist there?
  • Done is better than perfect: We all have perfectionist tendencies to a certain extent, but it can be useful to try to turn them off initially. In the first instance, try to write code that works, especially in the early days. You can always come back and improve aspects of it. But it is important to actually ship. Ugly code that gets the job done is better than beautiful code that is never finished.

4.5.4 Code comments and style

Code must be commented. Comments should focus on why certain code was written (and, to a lesser extent, why a common option was not selected). Indeed, it can be a good idea to write the comments before you write the code, explaining what you want to do and why, and then returning to write the code (Fowler and Beck 2018, 59).

There is no one way to write code, especially in R. However, there are some general guidelines that will make it easier for you, even if you are just working on your own. It is important to recognize that most projects will evolve over time, and one purpose served by code comments is as “[m]essages left for your future self (or near-future others) [that] help retrace and justify your decisions” (Bowers and Voors 2016).

Comments in R script files can be added by including the # symbol. We do not have to put a comment at the start of the line, it can be midway through. In general, we do not need to comment what every aspect of your code is doing but we should comment parts that are not obvious. For instance, if we read in some value then we may like to comment where it is coming from.

You should comment why you are doing something (Wickham 2021). What are you trying to achieve?

You must comment to explain weird things. Like if you are removing some specific row, say row 27, then why are you removing that row? It may seem obvious in the moment, but future-you in six months will not remember.

You should break your code into sections. For instance, setting up the workspace, reading in datasets, manipulating and cleaning the dataset, analyzing the datasets, and finally producing tables and figures. Each of these should be separated with comments explaining what is going on, and sometimes into separate files, depending on the length.

Additionally, at the top of each file it is important to note basic information, such as the purpose of the file and its pre-requisites or dependencies, the date, the author and contact information, and finally any red flags or todos.

At the very least every R script needs a preamble and a clear demarcation of sections.

#### Preamble ####
# Purpose: Brief sentence about what this script does
# Author: Your name
# Date: The date it was written
# Contact: Add your email
# License: Think about how your code may be used
# Pre-requisites: 
# - Maybe you need some data or some other script to have been run?


#### Workspace setup ####
# do not keep the install.packages line - comment out if need be
# Load packages
library(tidyverse)

# Read in the raw data. 
raw_data <- readr::read_csv("inputs/data/raw_data.csv")


#### Next section ####
...

Examples of nicely commented code include those from: Bob Nystrom, Dolatsara et al. (2021), Mathew Hauer and James Byars, and Jason Burton, Nicole Cruz, and Ulrike Hahn (Burton, Cruz, and Hahn 2021).

Finally, never rely on a user commenting and uncommenting code, or any other manual step, such as directory specification, for code to work. This will preclude the use of automated code checking and testing.

4.5.5 Tests

Tests need to be written throughout the code, and we need to write them as we go, not all at the end. Will this slow you down? Yes. But it will help you to think, and to fix mistakes, which will make your code better and lead to better overall productivity. Code without tests needs to be viewed with suspicion and never given the benefit of the doubt. The need for other people, and ideally, automated processes, to run tests on code is one reason that we emphasize reproducibility, as well as smaller aspects such as not hardcoding file-paths, and not having spaces in file names.

It is difficult to define a complete and general suite of tests, but broadly we want to test:

  1. boundary conditions,
  2. classes,
  3. missing data,
  4. the number of observations and variables,
  5. duplicates, and
  6. regression results.

We do all this initially on our simulated data and then move to the real data. It is of course possible to write an infinite number of tests, but a smaller number of high-quality tests is much better than many thoughtless tests.

Assertions are written throughout the code. They check whether something is true and stop the code from running if not (Irving et al. 2021, 272). In writing tests for data science, they will typically be used in data cleaning and preparation scripts. We have more to say about these in Chapter 11. Unit tests check some complete aspect of code (Irving et al. 2021, 274). We will consider them more in Chapter 14 when we consider modelling.
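As a sketch of what a handful of assertions might look like in a data preparation script (the dataset and column names are purely illustrative, and a package such as testthat provides a more structured alternative), consider the following.

library(tidyverse)

# Simulated stand-in for a cleaned dataset (columns are illustrative)
cleaned_data <-
  tibble(
    age = sample(18:90, size = 1000, replace = TRUE),
    visits = sample(0:9, size = 1000, replace = TRUE)
  )

# Assertions: stop the script if anything is not as expected
stopifnot(
  is.numeric(cleaned_data$age),
  all(cleaned_data$age >= 0 & cleaned_data$age <= 120),
  all(!is.na(cleaned_data$visits)),
  nrow(cleaned_data) == 1000,
  ncol(cleaned_data) == 2
)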

4.6 Efficiency

In general, we are, and will continue to be, largely concerned with just getting something done, not necessarily getting it done in the best or most efficient way. And to a large extent, worrying about getting something done in the best or most efficient way is almost always a waste of time. Until it is not. For the most part we are better off just pushing things into the cloud, letting them run for a reasonable time, and using that time to worry about other aspects of the pipeline, until that becomes unfeasible. At a certain point, and this point differs widely, efficiency becomes very important. Eventually ugly or slow code, and dogmatic insistence on a particular way of doing things, have an effect. And it is at that point that we need to be open to new approaches to ensure efficiency. These are sometimes derided as matters of taste. They are not. They are critical aspects of writing code as a professional, which is something every data scientist does. There is rarely one obvious place to look for performance gains. Instead, it is important to develop the ability to measure, evaluate, and think.

We start with code linting and styling. This does not speed up our code, per se, but we instead use linters to make our code easier to read, which makes it more efficient when another person comes to it, or when we revisit it. This enables code review and refactoring, which is where we re-write code to make it better, while not changing what it does. We then turn to measurement of run time, and introduce parallel processing, where we allow our computer to run multiple processes at the same time.

The first step to improving the efficiency of our code is to bring in a second pair of eyes. To make the most of their time, it is important that our code is first consistently styled and aligned with a style guide. After this, we can go through a formal process of code review. One common area to find efficiency is by running code in parallel. We may also re-write the code so that it does the same thing, but in a different way.

4.6.1 Sharing a code environment

We have discussed at length the need to share code, and we have put forward an approach to this using GitHub. And in Chapter 12, we will discuss sharing data. But there is another requirement to enable other people to run our code. In Chapter 2 we discussed how R itself, as well as R packages, update from time to time, as new functionality is developed, errors are fixed, and other general improvements are made. And in Chapter 3 we described how one advantage of the tidyverse is that it can update faster than base, because it is more specific. However, this means that even if we were to share all the code and data that we use, the package versions available to others might prevent the code from running.

The solution to this is to detail the environment that was used. There is a large number of ways to do this, and they can add a great deal of complexity, but the minimum standard is to document the version of R and R packages that were used, and make it easier for others to install that exact version. We use renv (Ushey 2022) to do this.

We first use init() to get the infrastructure set-up that we will need. In particular we are going to create a file that will record the packages and versions used. We use snapshot() to actually document what we are using. This creates a “lockfile” that records the information.

If we want to see which packages we are using in the R project, then we can use dependencies(). Doing this for the “starter_folder” indicates that the following packages are used: rmarkdown, bookdown, knitr, palmerpenguins, tidyverse, renv, haven, and readr.

We could open the lockfile file “renv.lock”, to see the exact versions if we wanted. The lockfile also documents all the other packages that were installed and where they were downloaded from. In the case of CRAN, it maintains an archive of all the versions of the packages, and so it is possible to get any version.

Someone coming to this project from outside, could then use restore() which would install the exact version of the packages that we used.
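A sketch of that renv workflow, run from the Console within the R Project, is as follows.

library(renv)

# Set up the renv infrastructure for this project (run once)
init()

# Record the packages, and their versions, in the "renv.lock" lockfile
snapshot()

# See which packages the project depends on
dependencies()

# Someone else coming to the project installs those exact versions
restore()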

4.6.2 Code linting and styling

Being fast is valuable but it is mostly about being able to iterate fast, not necessarily having code that runs fast. Backus (1981, 26) describes how even in 1954 a programmer cost at least as much as a computer, and these days compute is usually much cheaper than a programmer. Hence, it is important to use other people’s time efficiently.

Linting and styling is the process of checking code, mostly for stylistic issues, and re-arranging code to make it easier to read. (There is another aspect of linting, which is dealing with programming errors, such as forgetting a closing bracket, but here we focus on stylistic issues.) In general, the best efficiency gain that we can make is to make it easier for others to read our code, even if this is just ourselves returning to the code after a break. Jane Street, the US proprietary trading firm, places a very strong focus on ensuring their code is readable, as a core part of risk mitigation (Minsky 2011). While we may not all have billions of dollars under the mercurial management of our code, we all would likely prefer that our code does not produce errors.

We use lint() from lintr (Hester et al. 2022) to lint our code. For instance, consider the following R code (saved as “linting_example.R”).

SIMULATED_DATA <-
  tibble(
    division = c(1:150,151),
    party = sample(
      x = c('Liberal'),
      size = 151,
      replace = T
    ))
library(lintr)

lint(filename = "linting_example.R")

The result is that the file “linting_example.R” is opened and the issues that lint() found are printed in “Markers” (Figure 4.13). It is then up to you to deal with the issues.

Figure 4.13: Linting results

Making the recommended changes results in code that is more readable, and consistent with best practice, as defined by Wickham (2021).

simulated_data <-
  tibble(
    division = c(1:150, 151),
    party = sample(
      x = c("Liberal"),
      size = 151,
      replace = TRUE
    )
  )

At first it may seem that some of the aspects that the linter identifies, like trailing whitespace and only using double quotes, are small and inconsequential. But they distract from being able to fix bigger issues. Further, if we are not able to get small things right, then how could anyone trust that we could get the big things right? Therefore, it is important to have dealt with all the small things that a linter identifies.

In addition to lintr, we also use a styler, styler (Müller and Walthert 2022). This will automatically adjust style issues, in contrast to the linter, which gave us a list of issues to look at. To run this we use style_file().

library(styler)

style_file(path = "linting_example.R")

This will automatically make changes, such as spacing and indentation, so it is important to do this regularly, rather than only once at the end of a project, so as to be able to review the changes.

4.6.3 Code review and refactoring

Having dealt with all of these aspects of style, we can turn to code review. This is the process of having another person go through and critique the code. Code review is a critical part of writing code, and Irving et al. (2021, 465) describe it as “the most effective way to find bugs”. It is especially helpful, although quite daunting, when learning to code because getting feedback is a great way to improve.

When reviewing another person’s code, it is important to go out of your way to be polite. Small aspects of style, such as spacing and separation, should have been taken care of by a linter and styler, but if not, then we might make a general recommendation about that. Instead, we are looking for more important aspects, such as:

  1. Are there any magic numbers that could be changed to variables and explained?
  2. Are all the names consistent and meaningful?
  3. Is there any duplicated code?
  4. Are there sufficient comments?
  5. Are there appropriate and sufficient tests?
  6. Can the functions be understood, and are they well-documented?
  7. Are corner solutions considered? Are there any other corner solutions or edge cases that have been missed?
  8. Is the structure of the project appropriate?
  9. Are there any especially hard-to-understand aspects of the code?
  10. Are there defects that have been missed?
  11. Is there any code that is being copy-pasted?
  12. Can we change any of the code to data (Irving et al. 2021, 462)? To understand this, consider if we had some code that looked for the names of prime ministers and presidents. When we first wrote this code we likely added the relevant names into the function. Instead, we could create a small dataset of the names, and then have the code look up that dataset (a sketch of this follows the list).
  13. Are there any outstanding warnings?
  14. Can large functions be separated into smaller ones?
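To illustrate the “code to data” point, a minimal sketch (the names and the leaders dataset are purely illustrative) replaces hard-coded names with a small lookup table.

library(tidyverse)

# Before: names are hard-coded inside the function
is_leader_hardcoded <- function(name) {
  name %in% c("Justin Trudeau", "Joe Biden")
}

# After: the names live in a small dataset that the code looks up,
# so updating the list does not require changing the function
leaders <- tibble(name = c("Justin Trudeau", "Joe Biden"))

is_leader <- function(name, leaders_data = leaders) {
  name %in% leaders_data$name
}

is_leader("Joe Biden")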

Code review ensures that the code can be understood by at least one other person. This is a critical part of building knowledge about the world. At Google, code review is not primarily about finding defects, although that may happen, but is instead about ensuring readability and maintainability, as well as education (Sadowski et al. 2018a). This is also the case at Jane Street, where they use code review to catch bugs, share institutional knowledge by ensuring that what is known by one programmer is also known by others, assist with training, and oblige staff to write code that can be read (Minsky 2015).

Finally, code review does not have to be, and should not be, an onerous, days-long process of reading all the code. The best code review is a quick review of just one file, focused on suggesting changes to just a handful of lines. Indeed, it may be better to have a review done by a small team of people rather than one individual.

Having had code reviewed, we may then need to refactor it. To refactor code means to rewrite it so that the new code achieves the same outcome as the old code, just in a better way. For instance, Chawla (2020) discusses how the code underpinning an important UK COVID model was initially written by epidemiologists, and months later clarified and cleaned up by a team from the Royal Society, Microsoft, and GitHub. This was valuable because it provided more confidence in the model, even though both versions produced the same outputs, given the same inputs.

When we refactor our code, we want to make sure that the rewritten code achieves the same outcomes as the original code, which means it is important to have an extensive suite of appropriate tests in place. We rewrite code to make it easier for others to understand, which in turn allows more confidence in our conclusions. Go through the code review checklist above and address all of the comments.
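
As a minimal sketch of what such a test might look like, assuming the testthat package is available, and using a made-up function with an original and a refactored version, we could check that the two agree.

library(testthat)

# A hypothetical original function and its refactored replacement.
mean_votes_original <- function(votes) sum(votes) / length(votes)
mean_votes_refactored <- function(votes) mean(votes)

# The refactored code must produce the same output as the original.
test_that("refactored function matches the original", {
  votes <- c(100, 250, 300)
  expect_equal(mean_votes_refactored(votes), mean_votes_original(votes))
})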

We also want to make sure that our code satisfies best practice. Trisovic et al. (2022) detail some core recommendations based on examining 9,000 R scripts, and a refactor is a great opportunity to ensure that these are met:

  1. Removing any absolute paths, and ensuring that only relative paths (in relation to the “.Rproj” file) are used, as sketched after this list.
  2. Ensuring there is a clear order of execution. We have recommended using numbers in filenames to achieve this, but more sophisticated approaches, such as targets (Landau 2021), could be used instead.
  3. Having someone else run everything on their computer.
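
As a minimal, hypothetical sketch of the first two of these recommendations, the file names and paths below are placeholders.

# Numbering scripts makes the order of execution clear, for instance:
#   00_simulate_data.R
#   01_download_data.R
#   02_clean_data.R

# Within a script, use paths relative to the ".Rproj" file, for instance
# "data/raw_data.csv", rather than an absolute path such as
# "/Users/someone/Documents/project/data/raw_data.csv".
library(tidyverse)

raw_data <- read_csv(file = "data/raw_data.csv")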

4.6.4 Parallel processing

Sometimes code is slow because the computer needs to do the same thing many times. We may be able to take advantage of this and have those jobs done at the same time using parallel processing. This will be especially useful in Chapter 14 for modelling.

To get started we can use tic() and toc() from tictoc (Izrailev 2014) to time various aspects of our code. This is useful with parallel processing, but also more generally, to help us find out where the largest delays are.

library(tictoc)
tic("First bit of code")
print("Fast code")
[1] "Fast code"
toc()
First bit of code: 0.003 sec elapsed
tic("Second bit of code")
Sys.sleep(3)
print("Slow code")
[1] "Slow code"
toc()
Second bit of code: 3.005 sec elapsed

And so we know that there is something slowing down the code, which in this artificial case is Sys.sleep() adding a delay of three seconds.

We could use parallel, which is part of base R, to run functions in parallel. We could also use future (Bengtsson 2021), which brings additional features. To get started with future we use plan() to specify whether we want to run things sequentially (“sequential”) or in parallel (“multisession”). We then wrap whatever we want this applied to within future().

To see this in action we will create a dataset and then implement a function on a row-wise basis.

library(future)
library(tidyverse)
library(tictoc)

simulated_data <-
  tibble(
    random_draws = runif(
      n = 1000000,
      min = 0,
      max = 1000
    ) |>
      round(),
    more_random_draws = runif(
      n = 1000000,
      min = 0,
      max = 1000
    ) |>
      round()
  )

plan(sequential)

tic()
simulated_data <-
  simulated_data |>
  rowwise() |>
  mutate(
    which_is_smaller =
      min(c(
        random_draws,
        more_random_draws
      ))
  )
toc()

plan(multisession)

tic()
# future() sends the work to a background R session and returns a promise of
# the result; value() waits for that result so that simulated_data is again a
# dataset rather than an unresolved future.
simulated_data <-
  future(
    simulated_data |>
      rowwise() |>
      mutate(
        which_is_smaller =
          min(c(
            random_draws,
            more_random_draws
          ))
      )
  ) |>
  value()
toc()

4.7 Exercises and tutorial

Exercises

  1. (Plan) Consider the following scenario: In a certain country there are only ever four parties that could win a seat in parliament. Whichever candidate has a plurality of votes wins the seat, and the parliament is made up of 175 seats. An analyst is interested in the number of votes for each party by seat. Please sketch what that dataset could look like, and then sketch a graph that you could build to show all observations.
  2. (Simulate) Please further consider the scenario described, and decide which of the following could be used to simulate the situation (select all that apply)?
    1. tibble(seat = rep(1:175, each = 4), party = rep(x = 1:4, times = 175), votes = runif(n = 175 * 4, min = 0, max = 1000) |> floor())
    2. tibble(seat = rep(1:175, each = 4), party = sample(x = 1:4, size = 175, replace = TRUE), votes = runif(n = 175 * 4, min = 0, max = 1000) |> floor())
    3. tibble(seat = rep(1:175, each = 4), party = rep(x = 1:4, times = 175), votes = sample(x = 1:1000, size = 175 * 4))
    4. tibble(seat = rep(1:175, each = 4), party = sample(x = 1:4, size = 175, replace = TRUE), votes = sample(x = 1:1000, size = 175 * 4))
  3. (Acquire) Please identify one possible source of actual data about voting in a country of interest to you.
  4. (Explore) Assume that tidyverse is loaded and the dataset “election_results” has the columns “seat”, “party”, and “votes”. Which of the following would result in a count of the number of seats won by each party (pick one)?
    1. election_results |> group_by(seat) |> slice_max(votes, n = 1) |> ungroup() |> count(party)
    2. election_results |> group_by(seat) |> slice_max(votes, n = 1) |> count(party)
    3. election_results |> group_by(party) |> slice_max(votes, n = 1) |> ungroup() |> count(party)
    4. election_results |> group_by(party) |> slice_max(votes, n = 1) |> count(seat)
  5. (Communicate) Please write two paragraphs as if you had gathered data from that source, and had built a graph. The exact details contained in the paragraphs do not have to be factual (i.e. you do not actually have to get the data nor create the graphs).
  6. According to Alexander (2019) research is reproducible if (pick one)?
    1. It is published in peer-reviewed journals.
    2. All of the materials used in the study are provided.
    3. It can be reproduced exactly without the authors providing materials.
    4. It can be reproduced exactly, given all the materials used in the study.
  7. According to the timeline of Gelman (2016), a) when did Paul Meehl identify various issues; and b) when did null hypothesis significance testing (NHST) become controversial (pick one)?
    1. 1970s-1980s; 1990s-2000.
    2. 1960s-1970s; 1980s-1990.
    3. 1970s-1980s; 1980s-1990.
    4. 1960s-1970s; 1990s-2000.
  8. Which of the following are components of the project layout recommended by Wilson et al. (2017) (select all that apply)?
    1. requirements.txt
    2. doc
    3. data
    4. LICENSE
    5. CITATION
    6. README
    7. src
    8. results
  9. Based on Alexander (2021) please write a paragraph about some of the barriers you overcame, or still face, with regard to sharing code that you wrote.
  10. According to Gelfand (2021), what is the key part of “If you need help getting unstuck, the first step is to create a reprex, or reproducible example. The goal of a reprex is to package your problematic code in such a way that other people can run it and feel your pain. Then, hopefully, they can provide a solution and put you out of your misery.” (pick one)?
    1. package your problematic code
    2. other people can run it and feel your pain
    3. the first step is to create a reprex
    4. they can provide a solution and put you out of your misery
  11. According to Gelfand (2021), what are the three key aspects of a reprex (select all that apply)?
    1. data
    2. only the libraries that are necessary and all the libraries that are necessary
    3. relevant code and only relevant code
  12. According to Wickham (2021) for naming files, how would these files “00_get_data.R”, “get data.R” be classified (pick one)?
    1. bad; bad.
    2. good; bad.
    3. bad; good.
    4. good; good.
  13. Which of the following would result in bold text in Quarto (pick one)?
    1. **bold**
    2. ##bold##
    3. *bold*
    4. #bold#
  14. Which option would hide the warnings in a Quarto R chunk (pick one)?
    1. echo: false
    2. eval: false
    3. warning: false
    4. message: false
  15. Which options would run the R code chunk and display the results, but not show the code in a Quarto R chunk (pick one)?
    1. echo: false
    2. include: false
    3. eval: false
    4. warning: false
    5. message: false
  16. Why are R Projects important (select all that apply)?
    1. They help with reproducibility.
    2. They make it easier to share code.
    3. They make your workspace more organized.
    4. They ensure reproducibility.
  17. Please discuss a circumstance in which an R Project would be useful.
  18. Consider this sequence: “git pull, git status, ________, git status, git commit -m "My message", git push”. What is the missing step (pick one)?
    1. git add -A.
    2. git status.
    3. git pull.
    4. git push.
  19. Assuming the packages and datasets have been loaded, what is the mistake in this code: DoctorVisits |> select("visits") (pick one)?
    1. "visits"
    2. DoctorVisits
    3. select
    4. |>
  20. What is a reprex and why is it important to be able to make one (select all that apply)?
    1. A reproducible example that enables your error to be reproduced.
    2. A reproducible example that helps others help you.
    3. A reproducible example during the construction of which you may solve your own problem.
    4. A reproducible example that demonstrates you have actually tried to help yourself.
  21. The following code produces an error. Please use reprex (Bryan et al. 2019) to build a reproducible example that you could use to get help with it, and submit the reprex using a GitHub Gist. You should simplify many aspects, including reducing the number of packages, changing the dataset, and simplifying the filter() and mutate().
library(tidyverse)

oecd_gdp <-
  read_csv("https://stats.oecd.org/sdmx-json/data/DP_LIVE/.QGDP.../OECD?contentType=csv&detail=code&separator=comma&csv-lang=en")

head(oecd_gdp)

library(forcats)
library(dplyr)

oecd_gdp_most_recent <-
  oecd_gdp |>
  filter(
    TIME == "2021-Q3",
    SUBJECT == "TOT",
    LOCATION %in% c(
      "AUS",
      "CAN",
      "CHL",
      "DEU",
      "GBR",
      "IDN",
      "ESP",
      "NZL",
      "USA",
      "ZAF"
    ),
    MEASURE == "PC_CHGPY"
  ) |>
  mutate(
    european = if_else(
      LOCATION %in% c("DEU", "GBR", "ESP"),
      "European",
      "Not european"
    ),
    hemisphere = if_else(
      LOCATION %in%
        c("CAN", "DEU", "GBR", "ESP", "USA"),
      "Northern Hemisphere",
      "Southern Hemisphere"
    ),
  )

library(ggplot)
library(patchwork)

oecd_gdp_most_recent |>
  ggplot(mapping = aes(x = LOCATION, y = Value)) |>
  geom_bar(stat = "identity")

Tutorial

Code review is an important part of working as a professional (Sadowski et al. 2018b). Please put together a small Quarto file that downloads a dataset using opendatatoronto, cleans it, and makes a graph. Then exchange it with someone else. Following the advice of Google (2022), please provide them with a review of their code. Your review should be at least two pages of single-spaced content. Submit it as a PDF.

Paper

At about this point, Paper One from Appendix C would be appropriate.