3  Reproducible workflows

Required material

Key concepts and skills

3.1 Introduction

The number one thing to keep in mind about machine learning is that performance is evaluated on samples from one dataset, but the model is used in production on samples that may not necessarily follow the same characteristics… So when asking the question, “would you rather use a model that was evaluated as 90% accurate, or a human that was evaluated as 80% accurate”, the answer depends on whether your data is typical per the evaluation process. Humans are adaptable, models are not. If significant uncertainty is involved, go with the human. They may have inferior pattern recognition capabilities (versus models trained on enormous amounts of data), but they understand what they do, they can reason about it, and they can improvise when faced with novelty.

François Chollet, 20 February 2020.

If science is about systematically building and organizing knowledge in terms of testable explanations and predictions, then data science takes this and focuses on data. This means that building, organizing, and sharing knowledge is a critical aspect. Creating knowledge, once, in a way that only you can do it, does not meet this standard. Hence, there is a need for reproducible data science workflows.

Alexander (2019) talks about how reproducible research means it can be exactly redone, given all the materials used. This underscores the importance of providing the code, data, and environment. The minimum expectation is that another person is independently able to use your code, data, and environment, to get your results, including figures and tables. Ironically, there are different definitions of reproducibility between disciplines. Barba (2018) surveys a variety of disciplines and concludes that the predominant language usage implies the following definitions: Reproducible research is when “[a]uthors provide all the necessary data and the computer codes to run the analysis again, re-creating the results.” A replication is a study “that arrives at the same scientific findings as another study, collecting new data (possibly with different methods) and completing new analyses.”

Regardless of what it is specifically called, Gelman (2016) identifies how large an issue this is in various social sciences. The problem with work that is not reproducible, is that it does not contribute to our stock of knowledge about the world, which is wasteful and potentially even unethical. Since Gelman (2016), a great deal of work has been done in many social sciences and the situation has improved a little, but much work remains. And the situation is similar in the life sciences (Heil et al. 2021) and computer science (Pineau et al. 2021).

Some of the examples that Gelman (2016) talks about are not that important in the scheme of things. But at the same time, we saw, and continue to see, similar approaches being used in areas with big impacts. For instance, many governments have created “nudge” units that implement public policy (Sunstein and Reisch 2017) even though there is compelling evidence that some of the claims lack credibility (Maier et al. 2022; Szaszi et al. 2022). Governments are increasingly using algorithms that they do not make open (Chouldechova et al. 2018). And Herndon, Ash, and Pollin (2014) document how a paper in economics that was used by governments to justify austerity policies following the Global Financial Crisis turned out to not be reproducible.

At a minimum, and with few exceptions, we must release our code, datasets, and environment. Without these, it is difficult to know what a finding speaks to (Miyakawa 2020). More banally, we also do not know if there are mistakes or aspects that were inadvertently overlooked (Merali 2010; Hillel 2017; Silver 2020). Increasingly, we consider a paper to be an advertisement, and for the associated code, data, and environment to be the actual work (Buckheit and Donoho 1995). Steve Jobs, a co-founder of Apple, talked about how the best craftsmen ensure that even the aspects of their work that no one else will ever see are as well-finished and high-quality as the aspects that are public facing (Isaacson 2011). The same is true in data science, where often one of the distinguishing aspects of high-quality work is that the README and code comments are as polished as, say, the abstract of the associated paper.

Workflows exist within a cultural and social context, which imposes an additional ethical reason for the need for them to be reproducible. For instance, Wang and Kosinski (2018) use deep neural networks to train a model to distinguish between gay and heterosexual men. (Murphy (2017) provides a summary of the paper, the associated issues, and comments from its authors). To do this, Wang and Kosinski (2018, 248) needed a dataset of photos of people that were “adult, Caucasian, fully visible, and of a gender that matched the one reported on the user’s profile”. They verified this using Amazon Mechanical Turk, an online platform that pays workers a small amount of money to complete specific tasks. The instructions provided to the Mechanical Turk workers for this task specify that Barack Obama, the 44th US President, who had a white mother and a black father, should be classified as “Black”; and that Latino is an ethnicity, rather than a race (Mattson 2017). The classification task may seem objective, but, perhaps unthinkingly, echoes the views of Americans with a certain class and background.

This is just one specific concern about one part of the Wang and Kosinski (2018) workflow. Broader concerns are raised by others including Gelman, Mattson, and Simpson (2018). The main issue is that statistical models are specific to the data on which they were trained. And the only reason that we can identify likely issues in the model of Wang and Kosinski (2018) is because, despite not releasing the specific dataset that they used, they were nonetheless open about their procedure. For our work to be credible, it needs to be reproducible by others.

Some of the steps that we can take to make our work more reproducible include:

  1. Ensure the entire workflow is documented. This may involve addressing questions such as:
    • How was the raw dataset obtained and is access likely to be persistent and available to others?
    • What specific steps are being taken to transform the raw data into the data that were analyzed, and how can this be made available to others?
    • What analysis has been done, and how clearly can this be shared?
    • How has the final paper or report been built and to what extent can others follow that process themselves?
  2. Not worrying about perfect reproducibility initially, but instead focusing on trying to improve with each successive project. For instance, each of the following requirements is more onerous than the one before it, and there is no need to be concerned about not being able to do the last until we can do the first:
    • Can you run your entire workflow again?
    • Can another person run your entire workflow again?
    • Can “future-you” run your entire workflow again?
    • Can “future-another-person” run your entire workflow again?
  3. Including a detailed discussion about the limitations of the dataset and the approach in the final paper or report.

The workflow that we advocate in this book is: \[ \mbox{Plan}\rightarrow\mbox{Simulate}\rightarrow\mbox{Acquire}\rightarrow\mbox{Explore}\rightarrow\mbox{Share} \] But it can be alternatively considered as: “Think an awful lot, mostly read and write, sometimes code”.

There are various tools that we can use at the different stages that will improve the reproducibility of this workflow. This includes Quarto, R Projects, and Git and GitHub.

3.2 Quarto

3.2.1 Getting started

Quarto integrates code and natural language in a way that is called “literate programming” (Knuth 1984). It is the successor to R Markdown, which was a variant of Markdown specifically designed to allow R code chunks to be included. Quarto uses a mark-up language similar to HyperText Markup Language (HTML) or LaTeX, in contrast to a “What You See Is What You Get” (WYSIWYG) editor, such as Microsoft Word. This means that all the aspects are consistent; for instance, all top-level headings will look the same. But it also means that we use symbols to designate how we would like certain aspects to appear, and it is only when we render the document that we get to see what it looks like. A visual editor option can also be used, which hides the need for the user to do this mark-up themselves.

While it makes sense to use Quarto going forward, there are still a lot of resources written for and in R Markdown. For this reason we provide the R Markdown equivalents for this section in Appendix C.

Shoulders of giants

Fernando Pérez is an associate professor, in statistics, at the University of California, Berkeley and a Faculty Scientist, Data Science and Technology Division, at Lawrence Berkeley National Laboratory. He earned a PhD in particle physics from the University of Colorado, Boulder. During his PhD he created IPython, which enables Python to be used interactively, and now underpins Project Jupyter, which inspired similar notebook approaches such as R Markdown and now Quarto. Somers (2018) describes how open-source notebook approaches create virtuous feedback loops that result in dramatically improved scientific computing. And Romer (2018) aligns the features of open-source approaches, such as Jupyter, with the features that enable scientific consensus and progress. In 2017 Pérez was awarded the ACM Software System Award.

One advantage of literate programming is that we get a “live” document in which code executes and then forms part of the document. Another advantage of Quarto is that very similar code can compile into a variety of documents, including HTML pages and PDFs. Quarto also has default options set up for including title, author, and date sections. One disadvantage is that it can take a while for a document to compile because all the code needs to run. Tierney (2022) provides an especially useful and detailed Quarto usage guide.

We need to download Quarto from here. (Skip this step if you are using Posit Cloud because it is already installed.) We can then create a new Quarto document within RStudio (“File” -> “New File” -> “Quarto Document…”).

After opening a new Quarto document and selecting “Source” view, you will see the default top matter, contained within a pair of three dashes, as well as some examples of text showing a few of the markdown essential commands and R chunks, each of which are discussed further in the following sections.

3.2.2 Top matter

Top matter consists of defining aspects such as the title, author, and date. It is contained within three dashes at the top of a Quarto document. For instance, the following would specify a title, date that automatically updated to the date the document was rendered, and an author.

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
format: html
---

An abstract is a short summary of the paper, and we could add that to the top matter as well.

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
abstract: "This is my abstract."
format: html
---

By default, Quarto will create an HTML document, but we can change the output format to produce a PDF. This uses LaTeX in the background and may require the installation of supporting packages. It is common to need to first install tinytex (Xie 2019).
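Assuming tinytex is not already present, it can be installed once, interactively, from the R console (it does not belong in a Quarto document itself):

```r
# Run once, interactively; installs the package and then
# a minimal LaTeX distribution used when rendering to PDF
install.packages("tinytex")
tinytex::install_tinytex()
```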

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
abstract: "This is my abstract."
format: pdf
---

We can include references by specifying a BibTeX file in the top matter and then calling it within the text, as needed.

---
title: "My document"
author: "Rohan Alexander"
date: "`r format(Sys.time(), '%d %B %Y')`"
format: pdf
abstract: "This is my abstract."
bibliography: bibliography.bib
---

We would need to make a separate file called “bibliography.bib” and save it next to the Quarto file. In the BibTeX file we need an entry for the item that is to be referenced. For instance, the citation for R can be obtained with citation() and this can be added to the “bibliography.bib” file. Similarly, the citation for a package can be found by including the package name, for instance citation("tidyverse"). It can be helpful to use Google Scholar, or doi2bib, to get citations for books or articles.
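As a sketch of how these entries can be obtained from within R, assuming the tidyverse is installed (the printed output can then be pasted into “bibliography.bib”):

```r
# Print BibTeX entries that can be pasted into bibliography.bib
toBibtex(citation())             # the entry for R itself
toBibtex(citation("tidyverse"))  # the entry for a package, here the tidyverse
```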

@Manual{,
  title = {R: A Language and Environment for Statistical Computing},
  author = {{R Core Team}},
  organization = {R Foundation for Statistical Computing},
  address = {Vienna, Austria},
  year = {2021},
  url = {https://www.R-project.org/},
}

@Article{,
  title = {Welcome to the {tidyverse}},
  author = {Hadley Wickham and Mara Averick and Jennifer Bryan and...},
  year = {2019},
  journal = {Journal of Open Source Software},
  volume = {4},
  number = {43},
  pages = {1686},
  doi = {10.21105/joss.01686},
}

We need to create a unique key that we use to refer to this item in the text. This can be anything, provided it is unique, but meaningful ones can be easier to remember, for instance “citeR”.

@Manual{citeR,
  title = {R: A Language and Environment for Statistical Computing},
  author = {{R Core Team}},
  organization = {R Foundation for Statistical Computing},
  address = {Vienna, Austria},
  year = {2021},
  url = {https://www.R-project.org/},
}

@Book{tellingstories,
  title = {Telling Stories with Data},
  author = {Rohan Alexander},
  year = {2022},
  publisher = {CRC Press},
  url = {https://tellingstorieswithdata.com},
}

To cite R in the Quarto document we include @citeR, which would put the brackets around the year, like this: R Core Team (2022), or [@citeR], which would put the brackets around the whole thing, like this: (R Core Team 2022).

The reference list at the end of the paper is automatically built from the BibTeX file, based on the references cited in the paper. At the end of the Quarto document, include a heading “# References”, and the actual citations will be added after it. When the Quarto file is rendered, Quarto sees the citations in the content, gets the reference details that it needs from the BibTeX file, builds the reference list, and then adds it after that heading in the rendered document.

3.2.3 Essential commands

Like R Markdown, Quarto uses a variation of Markdown as its underlying syntax. Essential Markdown commands include those for emphasis, headers, lists, links, and images. A reminder of these is included in RStudio (“Help” -> “Markdown Quick Reference”). It is your choice as to whether you use the visual or the source editor. Either way, it is good to understand these essentials, because it will not always be possible to use a visual editor, for instance if you are quickly looking at a Quarto document on GitHub. Also, RStudio is sometimes laggy, and it can be useful to use a text editor, such as Sublime Text or VS Code.

  • Emphasis: *italic*, **bold**
  • Headers (these go on their own line with a blank line before and after):
         # First level header
         ## Second level header
         ### Third level header
  • Unordered list, with sub-lists:
    * Item 1
    * Item 2
        + Item 2a
        + Item 2b
  • Ordered list, with sub-lists:
    1. Item 1
    2. Item 2
    3. Item 3
        + Item 3a
        + Item 3b
  • URLs can be added: [this book](https://www.tellingstorieswithdata.com) results in this book.
  • A paragraph is created by leaving a blank line.
A paragraph about an idea, nicely spaced from the following paragraph.

A paragraph about another idea, again spaced from the earlier paragraph.

Once we have added some aspects, we may want to see the rendered document. To build the document, click “Render”.

3.2.4 R chunks

We can include code for R and many other languages in code chunks within a Quarto document. Then when we render the document, the code will run and be included in the document.

To create an R chunk, we start with three backticks and then within curly braces we tell Quarto that this is an R chunk. Anything inside this chunk will be considered R code and run as such. For instance, we could load the tidyverse and AER and make a graph of the number of illnesses that survey respondents had in the past two weeks.


```{r}
library(tidyverse)
library(AER)

data("DoctorVisits", package = "AER")

DoctorVisits |>
  ggplot(aes(x = illness)) +
  geom_histogram(stat = "count")
```

The output of that code is Figure 3.1.

Figure 3.1: Number of illnesses in the past two weeks, based on the 1977–1978 Australian Health Survey

There are various evaluation options that are available in chunks. We include these, each on a new line, by opening the line with the chunk-specific comment delimiter “#|” and then the option. Helpful options include:

  • echo: This controls whether the code itself is included in the document. For instance, echo: false would mean the code will be run and its output will show, but the code itself would not be included in the document.
  • include: This controls whether the output of the code is included in the document. For instance, include: false would run the code, but would not result in any output, and the code itself would not be included in the document.
  • eval: This controls whether the code should be evaluated. For instance, eval: false would mean that the code is not run, and hence there would not be any output to include, but the code itself would be included in the document.
  • warning: This controls whether warnings should be included in the document. For instance, warning: false would mean that warnings are not included.
  • message: This controls whether messages should be included in the document. For instance, message: false would mean that messages are not included in the document.

For instance, we could include the output, but not the code, and suppress any warnings.

```{r}
#| echo: false
#| warning: false

data("DoctorVisits", package = "AER")

DoctorVisits |>
  ggplot(aes(x = visits)) +
  geom_histogram(stat = "count")
```

It is important to leave a blank line on either side of an R chunk, otherwise it may not run properly. It is also important that lower case is used for logical values, i.e. “false” not “FALSE”.

Most people did not visit a doctor in the past two weeks.

```{r}
#| echo: false
#| warning: false

data("DoctorVisits", package = "AER")

DoctorVisits |>
  ggplot(aes(x = visits)) +
  geom_histogram(stat = "count")
```

There were some people that visited a doctor once, and then...

It is also important that the Quarto document itself loads any datasets that are needed. It is not enough that they are in your interactive environment, because when the Quarto document is rendered, the code is evaluated in a fresh environment.

Often when writing code, we may want to make the same change across multiple lines, or change all instances of a particular thing. We can achieve this with multiple cursors. If we want a cursor across multiple consecutive lines, we hold Option on Mac, or Alt on Windows, while dragging the cursor over the relevant lines. If we want to select all instances of a particular thing, we highlight one instance, say a variable name, then use Find/Replace (Command + F on Mac, or Ctrl + F on Windows) and select “All”. This will enable a cursor at every instance of that thing.

3.2.5 Cross-references

It can be useful to cross-reference figures, tables, and equations. This makes it easier to refer to them in the text. To do this for a figure we refer to the name of the R chunk that creates or contains the figure. For instance, consider the following code.

```{r}
#| label: fig-uniquename
#| fig-cap: Number of illnesses in the past two weeks, based on the 1977--1978 Australian Health Survey
#| warning: false

data("DoctorVisits", package = "AER")

DoctorVisits |>
  ggplot(aes(x = illness)) +
  geom_histogram(stat = "count")
```

Figure 3.2: Number of illnesses in the past two weeks, based on the 1977–1978 Australian Health Survey

Then (@fig-uniquename) would produce: (Figure 3.2), as the label of the R chunk is fig-uniquename. We need to add “fig-” to the start of the chunk label so that Quarto knows that this is a figure. We then include a “fig-cap:” option in the R chunk that specifies a caption.

We can add #| layout-ncol: 2 in an R chunk within a Quarto document to have two graphs appear side-by-side (Figure 3.3). Here Figure 3.3 (a) uses the minimal theme, and Figure 3.3 (b) uses the classic theme. Both graphs are produced by the same chunk, labeled with #| label: fig-doctorgraphsidebyside, with an additional option of #| fig-subcap: ["Illnesses","Visits to the doctor"], which provides the sub-captions. The letters are then added in-text by appending “-1” and “-2” to the end of the label: (@fig-doctorgraphsidebyside), @fig-doctorgraphsidebyside-1, and @fig-doctorgraphsidebyside-2.

```{r}
#| eval: true
#| warning: false
#| label: fig-doctorgraphsidebyside
#| fig-cap: "Two variants of Doctor Visits"
#| fig-subcap: ["Illnesses","Visits to the doctor"]
#| layout-ncol: 2

DoctorVisits |>
  ggplot(aes(x = illness)) +
  geom_histogram(stat = "count") +
  theme_minimal()

DoctorVisits |>
  ggplot(aes(x = visits)) +
  geom_histogram(stat = "count") +
  theme_classic()
```

(a) Illnesses

(b) Visits to the doctor

Figure 3.3: Two variants of Doctor Visits

We can take a similar approach to cross-reference tables. For instance, (@tbl-docvisittable) will produce: (Table 3.1). In this case we specify “tbl” at the start of the label so that Quarto knows that it is a table. And we specify a caption for the table with “tbl-cap:”.

```{r}
#| label: tbl-docvisittable
#| tbl-cap: "Number of visits to the doctor in the past two weeks, based on the 1977--1978 Australian Health Survey"

DoctorVisits |>
  count(visits) |>
  knitr::kable()
```

Table 3.1: Number of visits to the doctor in the past two weeks, based on the 1977–1978 Australian Health Survey

visits      n
     0   4141
     1    782
     2    174
     3     30
     4     24
     5      9
     6     12
     7     12
     8      5
     9      1

Finally, we can also cross-reference equations. To do that we need to add a tag, such as {#eq-gdpidentity}, which we then reference.

$$
Y = C + I + G + (X - M)
$$ {#eq-gdpidentity}

For instance, we then use @eq-gdpidentity to produce Equation 3.1.

\[ Y = C + I + G + (X - M) \tag{3.1}\]

When using cross-references, it is important that the labels are relatively simple. In general, try to keep the names simple but unique, avoid punctuation and stick to letters and hyphens. Do not use underscores, because they can cause an error.

3.3 R Projects and file structure

Projects are widely used in software development and exist to keep all the files (data, analysis, report, etc.) associated with a particular project together and related to each other. This use of “project” in the software development sense is distinct from “project” in the project management sense. An R Project can be created in RStudio: “File” -> “New Project”, then select “Empty project”, name the R Project, and decide where to save it. For instance, an R Project focused on maternal mortality may be called “maternalmortality”. The use of R Projects enables “reliable, polite behavior across different computers or users and over time” (Bryan and Hester 2020). This is because they remove the context of that folder from its broader existence; files exist in relation to the base of the R Project, not the base of the computer.

Once a project has been created, a new file with the extension “.Rproj” will appear in that folder. An example of a folder with an R Project, an example Quarto document, and an appropriate file structure is available here. That can be downloaded: “Code” -> “Download ZIP”.

The main advantage of using an R Project is that we can reference files within it in a self-contained way. That means when others want to reproduce our work, they will not need to change all the file references and structure, as everything is referenced in relation to the “.Rproj” file. For instance, instead of reading a CSV from, say, "~/Documents/projects/book/data/", you can read it in from "data/", relative to where the “.Rproj” file is. It may be that someone else does not have a “projects” folder, and so the former would not work for them, while the latter would.
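As a sketch, assuming a hypothetical file “data/raw_data.csv” saved within the R Project:

```r
library(tidyverse)

# This relative path is resolved from the location of the ".Rproj" file,
# so it works for anyone who has the project folder, wherever they save it
raw_data <- read_csv("data/raw_data.csv")
```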

The use of R Projects is required to meet the minimal level of reproducibility. The use of functions such as setwd(), and computer-specific file paths, bind work to a specific computer in a way that is not appropriate. Trisovic et al. (2022) describe the use of absolute paths, rather than relative paths, as a common error that they had to correct in their large-scale study of R code, uploaded to the Harvard Dataverse, that underpins research papers.

There are a variety of ways to set up a folder. A variant of Wilson et al. (2017) that is often useful is shown in the example file structure. Here we have an “inputs” folder that contains raw data, which should never be modified (Wilson et al. 2017), and literature related to the project, which similarly cannot be modified. An “outputs” folder contains data that we create using R, as well as the paper that we are writing. And a “scripts” folder contains the code that modifies the raw data and saves the results into “outputs”. We will do most of our work in “scripts”, and write the Quarto file for the paper in “outputs”. Other useful aspects include a “README.md”, which specifies overview details about the project, and a LICENSE. An example of what to put in the README is here. Another helpful variant of this project skeleton is provided by Mineault and The Good Research Code Handbook Community (2021).
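Putting this together, the skeleton described above might look like the following (the project name is illustrative):

```
maternalmortality/
├── maternalmortality.Rproj
├── README.md
├── LICENSE
├── inputs/
│   ├── data/
│   └── literature/
├── outputs/
│   ├── data/
│   └── paper/
└── scripts/
```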

3.4 Version control

We implement version control through a combination of Git and GitHub. There are a variety of reasons for this including:

  1. Enhancing the reproducibility of work by making it easier to share code and data;
  2. Making it easier to share work;
  3. Improving workflow by encouraging systematic approaches; and
  4. Making it easier to work in teams.

Git is a version control system with a fascinating history (Brown 2018). The way one often starts doing version control is to have various versions of the one file: “first_go.R”, “first_go-fixed.R”, “first_go-fixed-with-mons-edits.R”. But this soon becomes cumbersome. One often then turns to dates, for instance: “2022-01-01-analysis.R”, “2022-01-02-analysis.R”, “2022-01-03-analysis.R”, etc. While this keeps a record, it can be difficult to search when we need to go back, because it is hard to remember the date on which some change was made. In any case, it quickly gets unwieldy for a project that is being regularly worked on.

Instead of this, we use Git so that we can have one version of the file, say, “analysis.R”, and then use Git to keep a record of the changes to that file, and a snapshot of that file at a given point in time. We determine when Git takes that snapshot, and we additionally include a message saying what changed between this snapshot and the last. In that way, there is only ever one version of the file, but the history can be more easily searched.
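This cycle can be sketched in the Terminal using a throwaway repository, so that nothing here touches real work (all file and repository names are illustrative):

```shell
# Create a throwaway repository to practice in
mkdir -p demo-repo
git -C demo-repo init --quiet
git -C demo-repo config user.name "Example Name"
git -C demo-repo config user.email "example@example.com"

# There is only ever one version of the file
echo 'data <- c(1, 2, 3)' > demo-repo/analysis.R

# Stage the change, then take a snapshot (a "commit") with a message
git -C demo-repo add analysis.R
git -C demo-repo commit --quiet -m "Add initial analysis script"

# The history of snapshots can then be searched
git -C demo-repo log --oneline
```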

One complication is that Git was designed for teams of software developers. As such, while it works, it can be a little ungainly for non-developers. But in general it is the case that Git has been usefully adapted for data science, even when the only collaborator one may have is one’s future self (Bryan 2018a).

GitHub, GitLab, and various other companies offer easier-to-use services that build on Git. While there are tradeoffs, we introduce GitHub here because it is the predominant platform (Eghbal 2020, 21). Git and GitHub are built into Posit Cloud, which provides a nice option if you have issues with local installation. One of the initial challenging aspects of Git is the terminology. Folders are called “repos”. Creating a snapshot is called a “commit”. One gets used to it eventually, but feeling confused initially is normal. Bryan (2020) is especially useful for setting up and using Git and GitHub.

3.4.1 Git

We first need to check whether Git is installed. Open RStudio, go to the Terminal, type the following, and then press enter/return.

git --version

If you get a version number, then you are done (Figure 3.4 (a)).

(a) Using Terminal to check whether Git is installed in RStudio

(b) Adding a username and email address to Git in RStudio

Figure 3.4: An overview of the steps involved in setting up Git

Git is pre-installed in Posit Cloud, it should be pre-installed on Mac, and it may be pre-installed on Windows. If you do not get a version number in response, then you need to install it. To do that you should follow the instructions specific to your operating system in Bryan (2020, chap. 5).

Given Git is installed we need to tell it our username and email. We need to do this because Git adds this information whenever we take a “snapshot”, or to use Git’s language, whenever we make a commit.

Again, within the Terminal, type the following, replacing the details with yours, and then enter/return after each line.

git config --global user.name "Rohan Alexander"
git config --global user.email "rohan.alexander@utoronto.ca"
git config --global --list

When this set-up has been done properly, the values that you entered for “user.name” and “user.email” will be returned after the last line (Figure 3.4 (b)).

These details (username and email address) will be public. There are various ways to hide the email address if necessary, and GitHub provides instructions about this. Bryan (2020, chap. 7) provides more detailed instructions about this step, and a trouble-shooting guide.

3.4.2 GitHub

Now that Git is set up, we need to set up GitHub. We created an account in Chapter 2, which we use again here. After signing in, we first need to make a new folder, which is called a “repo” in Git. Look for a “+” in the top right, and then select “New Repository” (Figure 3.5 (a)).

(a) Start process of creating a new repository

(b) Creating a new repository in GitHub

(c) Copy the URL of the new repository

(d) Adding the project to Posit Cloud

(e) Creating a PAT

(f) Adding files to be committed

(g) Making a commit

Figure 3.5: An overview of the steps involved in setting up GitHub

At this point we can add a sensible name for the repo. Leave it as “public” for now, because it can always be deleted later. And check the box to “Initialize this repository with a README”. Change “Add .gitignore” to R. After that, click “Create repository” (Figure 3.5 (b)).

This will take us to a screen that is fairly empty, but the details that we need are in the green “Clone or Download” button, which we can copy by clicking the clipboard (Figure 3.5 (c)).

Now returning to RStudio, in Posit Cloud, we create a “New Project” using “New Project from Git Repository”. It will ask for the URL that we just copied (Figure 3.5 (d)). If you are using a local computer, then this same step is accomplished through the menu: “File” -> “New Project…” -> “Version Control” -> “Git”, then paste in the URL, give the folder a meaningful name, check “Open in new session”, then “Create Project”.

At this point, a new folder has been created locally that we can use. We will want to be able to push it back to GitHub, and for that we will need to use a Personal Access Token (PAT) to link our RStudio Workspace with our GitHub account. We use usethis (Wickham, Bryan, and Barrett 2022) and gitcreds (Csárdi 2022) to enable this. These are, respectively, a package that automates repetitive tasks, and a package that authenticates with GitHub. To create a PAT, while signed into GitHub in the browser, and after installing usethis run usethis::create_github_token() in your R session. GitHub will open in the browser with various options filled out (Figure 3.5 (e)). It can be useful to give the PAT an informative name by replacing “Note”, for instance “PAT for RStudio”, then “Generate token”.

We only have one chance to copy this token, and if we make a mistake then we will need to generate a new one. Do not include the PAT in any R script or Quarto document. Instead, after installing gitcreds, run gitcreds::gitcreds_set(), which will then prompt you to add your PAT in the console.
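Put together, this one-time set-up is sketched below; both functions are interactive, so they belong in the console, not in a script or Quarto document:

```r
install.packages(c("usethis", "gitcreds"))

usethis::create_github_token()  # opens GitHub in the browser to generate a PAT
gitcreds::gitcreds_set()        # then paste the PAT at the console prompt
```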

To use GitHub for a project that we are actively working on we follow a procedure:

  1. The first thing to do is almost always to get any changes with “pull”. To do this, open the Git pane in RStudio, and click the blue down arrow. This gets any changes to the folder, as it is on GitHub, into our own version of the folder.
  2. We can then make our changes to our copy of the folder. For instance, we could update the README, and then save it as normal.
  3. Once this is done, we need to “add”, “commit”, and “push”. In the Git pane in RStudio, select the files to be added. This adds them to the staging area. Then click “Commit” (Figure 3.5 (f)). A new window will open. Add a commit message which is informative about the change that was made, and then click “Commit” in that new window (Figure 3.5 (g)). Finally, click “Push” to send the changes to GitHub.

There are a few common pain-points when it comes to Git and GitHub. We recommend committing and pushing regularly, especially when you are new to version control. This increases the number of snapshots that you could come back to if needed. All commits should have an informative commit message. If you are new to version control, then the expectation of a good commit message is that it contains a short summary of the change, followed by a blank line, and then an explanation of the change including what the change is, and why it is being made. For instance, if your commit adds graphs to a paper, then a commit message could be:

Add graphs

Graphs of unemployment and inflation added into Data section.

There is some evidence of a relationship between overall quality and commit behavior (Sprint and Conci 2019). In an ideal scenario the commit messages act as a kind of journal of the project.

Git and GitHub were designed for software developers, rather than data scientists. GitHub limits the size of the files it will consider to 100MB, and even 50MB will prompt a warning. Data science projects regularly involve datasets that are larger than this. In Chapter 10 we discuss the use of data deposits, which can be especially useful when a project is completed, but while we are actively working on a project it can be useful to ignore such files, at least as far as Git and GitHub are concerned. We do this using a “.gitignore” file, in which we list all of the files that we do not want to track using Git. The starter folder contains an example “.gitignore” file. And it can be helpful to run usethis::git_vaccinate(), which will add a variety of files to a global “.gitignore” file in case you forget to do it on a project basis. Mac users will find it useful that this will cause “.DS_Store” files to be ignored.
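For instance, a hypothetical “.gitignore” for a data science project might contain entries such as (the data path here is illustrative):

```
# Large raw data files that exceed GitHub's limits
inputs/data/raw_data.csv

# macOS folder metadata
.DS_Store

# R session artifacts
.Rhistory
.RData
```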

We used the Git pane in RStudio, which removed the need to use the Terminal, but it did not remove the need to go to GitHub and set up a new project. Having set up Git and GitHub, we can further improve this aspect of our workflow with usethis (Wickham, Bryan, and Barrett 2022).

First check that Git is set up with usethis::git_sitrep(). This should print information about the username and email. We can use usethis::use_git_config() to update these details if needed.

usethis::use_git_config(
  user.name = "Rohan Alexander",
  user.email = "rohan.alexander@utoronto.ca"
)

Rather than starting a new project in GitHub, and then adding it locally, we can now use usethis::use_git() to initiate it and commit the files. Having committed, we can use usethis::use_github() to push to GitHub, which will create the folder on GitHub as well.
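The usethis-based workflow just described reduces to a few calls (interactive; this assumes the PAT has already been stored with gitcreds):

```r
library(usethis)

git_sitrep()  # check that Git is set up, with username and email
use_git()     # initialize Git in the current project and commit the files
use_github()  # create the repository on GitHub and push to it
```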

3.5 Using R in practice

3.5.1 Dealing with errors

When you are programming, eventually your code will break, when I say eventually, I mean like probably 10 or 20 times a day.

Gelfand (2021)

Everyone who uses R, or any programming language for that matter, runs into trouble at some point. This is normal. Programming is hard. At some point code will not run, or will throw an error. This happens to everyone. It is common to get frustrated, but to move forward we develop strategies to work through the issues:

  1. If you are getting an error message, then sometimes it will be useful. Try to read it carefully to see if there is anything of use in it.
  2. Try to search, say on Google, for the error message. It can be useful to include “tidyverse” or “in R” in the search to help make the results more appropriate. Sometimes Stack Overflow results can be useful.
  3. Look at the help file for the function, by putting “?” before the function, for instance, ?pivot_wider(). A common issue is to use a slightly incorrect argument name or format, such as accidentally including a string instead of an object name.
  4. Look at where the error is happening and remove or comment out code until the error is resolved, and then slowly add code back again.
  5. Check the class of the object, with class(), for instance, class(data_set$data_column). Ensure that it is what is expected.
  6. Restart R (“Session” -> “Restart R and Clear Output”) and load everything again.
  7. Restart the computer.
  8. Search for what you are trying to do, rather than the error, being sure to include “tidyverse” or “in R” in the search to help make the results more appropriate. For instance, “save PDF of graph in R made using ggplot”. Sometimes there are relevant blog posts or Stack Overflow answers that will help.
  9. Make a small, self-contained, reproducible example (a “reprex”) to see if the issue can be isolated and to enable others to help.

More generally, while this is rarely possible to do, it is almost always helpful to take a break and come back the next day.
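As an illustration of checking classes (strategy 5), here is a small sketch using the built-in mtcars dataset and a hypothetical character vector:

```r
# The built-in mtcars dataset has a numeric mpg column
class(mtcars$mpg)
#> [1] "numeric"

# A common surprise: numbers that were read in as character
mpg_as_text <- c("21.0", "22.8", "21.4")
class(mpg_as_text)
#> [1] "character"

# Converting explicitly resolves the mismatch
class(as.numeric(mpg_as_text))
#> [1] "numeric"
```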

3.5.2 Reproducible examples

No one can advise or help you—no one. There is only one thing you should do. Go into yourself.

Rilke (1929)

Asking for help is a skill like any other. We get better at it with practice. It is important to try not to say “this doesn’t work”, “I tried everything”, “your code does not work”, or “here is the error message, what do I do?”. In general, it is not possible to help based on these comments, because there are too many possible issues. You need to make it easy for others to help you. This involves a few steps.

  1. Provide a small, self-contained example of your data and code, and detail what is going wrong.
  2. Document what you have tried so far, including which Stack Overflow and RStudio Community pages you have looked at, and why they are not quite what you are after.
  3. Be clear about the outcome that you would like.

Begin by creating a minimal REPRoducible EXample: a “reprex”. This is code that contains what is needed to reproduce the error, but only what is needed. This means that the code is likely a smaller, simpler version that nonetheless reproduces the error.

Sometimes this process enables one to solve the problem. If it does not, then it gives someone else a fighting chance of being able to help. There is almost no chance that you have got a problem that someone has not addressed before. It is more likely that the main difficulty is in trying to communicate what you are trying to do and what is happening, in a way that allows others to recognize both. Developing tenacity is important.

To develop reproducible examples, reprex (Bryan et al. 2019) is especially useful. To use it we:

  1. Load the reprex package: library(reprex).
  2. Highlight, and copy, the code that is giving issues.
  3. Run reprex() in the Console.

If the code is self-contained, then it will preview in the Viewer. If it is not, then it will error, and the code needs to be re-written so that it is self-contained.

If you need data to reproduce the error, then you should use data that is built into R. There are a large number of datasets built into R, and they can be seen using library(help = "datasets"). Where possible, use a common option such as “mtcars” or “faithful”. Combining a reprex with a GitHub Gist, which was introduced in Chapter 2, increases the chances that someone is able to help you.
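For instance, a self-contained snippet suitable for passing to reprex() might look like the following hypothetical example, which uses only the built-in mtcars dataset and base R:

```r
# Average fuel efficiency by number of cylinders, using only built-in data
average_mpg_by_cyl <- aggregate(mpg ~ cyl, data = mtcars, FUN = mean)
average_mpg_by_cyl
```

Because everything needed is contained in the snippet, anyone who copies it and runs reprex() will see exactly what you see.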

3.5.3 Mentality

(Y)ou are a real, valid, competent user and programmer no matter what IDE you develop in or what tools you use to make your work work for you

(L)et’s break down the gates, there’s enough room for everyone

Sharla Gelfand, 10 March 2020.

If you write code, then you are a programmer, regardless of how you do it, what you are using it for, or who you are. But there are a few traits that one tends to notice great programmers have in common.

  • Focused: Often having an aim to “learn R” or something similar tends to be problematic, because there is no real end point to that. It tends to be more efficient to have smaller, more specific goals, such as “make a histogram about the 2022 Australian Election with ggplot”. This is something that can be focused on and achieved in a few hours. The issue with goals that are more nebulous, such as “I want to learn R”, is that it becomes easy to get lost on tangents and much more difficult to get help. This can be demoralizing and lead to people quitting too early.
  • Curious: It is almost always useful to “have a go”; that is, if you are not sure, then just try it. In general, the worst that happens is that you waste your time. You can rarely break something irreparably with code. For instance, if you want to know what happens if you pass a “vector” instead of a “dataframe” to ggplot() then try it.
  • Pragmatic: At the same time, it can be useful to stick within reasonable bounds, and make one small change each time. For instance, say you want to run some regressions, and are curious about the possibility of using the tidymodels package (Kuhn and Wickham 2020) instead of lm(). A pragmatic way to proceed is to use one aspect from the tidymodels package initially and then make another change next time.
  • Tenacious: Again, this is a balancing act. There are always unexpected problems and issues with every project. On the one hand, persevering despite these is a good tendency. But on the other hand, sometimes one does need to be prepared to give up on something if it does not seem like a break-through is possible. Here mentors can be useful as they tend to be a better judge of what is reasonable. It is also where appropriate planning is useful.
  • Planned: It is almost always useful to excessively plan what you are going to do. For instance, you may want to make a histogram of the 2019 Canadian Election. You should plan the steps that are needed and even to sketch out how each step might be implemented. For instance, the first step is to get the data. What packages might be useful? Where might the data be? What is the back-up plan if the data do not exist there?
  • Done is better than perfect: We all have perfectionist tendencies to a certain extent, but it can be useful to try to turn them off initially. In the first instance, try to write code that works, especially in the early days. You can always come back and improve aspects of it. But it is important to actually ship. Ugly code that gets the job done is better than beautiful code that is never finished.

3.5.4 Code comments and style

Code must be commented. Comments should focus on why certain code was written (and, to a lesser extent, why a common option was not selected). Indeed, it can be a good idea to write the comments before you write the code, explaining what you want to do and why, and then returning to write the code (Fowler and Beck 2018, 59).

There is no one way to write code, especially in R. However, there are some general guidelines that will make it easier for you, even if you are just working on your own. It is important to recognize that most projects will evolve over time, and one purpose served by code comments is as “[m]essages left for your future self (or near-future others) [that] help retrace and justify your decisions” (Bowers and Voors 2016).

Comments in R script files can be added by including the # symbol. (The behavior of # is different for lines inside an R chunk, where it acts as a comment, compared with lines outside an R chunk, where it sets heading levels.) We do not have to put a comment at the start of a line; it can be midway through. In general, we do not need to comment on what every aspect of our code is doing, but we should comment on parts that are not obvious. For instance, if we read in some value, then we may like to comment on where it is coming from.

You should comment why you are doing something (Wickham 2021). What are you trying to achieve?

You must comment to explain weird things. Like if you are removing some specific row, say row 27, then why are you removing that row? It may seem obvious in the moment, but future-you in six months will not remember.
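A hypothetical sketch of such a comment, with made-up data:

```r
raw_data <- data.frame(id = 1:3, age = c(25, -3, 40))

# Drop the observation with a negative age: this is a data-entry
# error, not a real value, and future-us will want to know why
# it was removed.
cleaned_data <- raw_data[raw_data$age >= 0, ]

nrow(cleaned_data)
#> [1] 2
```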

You should break your code into sections. For instance, setting up the workspace, reading in datasets, manipulating and cleaning the dataset, analyzing the datasets, and finally producing tables and figures. Each of these should be separated with comments explaining what is going on, and sometimes into separate files, depending on the length.

Additionally, at the top of each file it is important to note basic information, such as the purpose of the file, any pre-requisites or dependencies, the date, the author and contact information, and finally any red flags or to-dos.

At the very least every R script needs a preamble and a clear demarcation of sections.

#### Preamble ####
# Purpose: Brief sentence about what this script does
# Author: Your name
# Date: The date it was written
# Contact: Add your email
# License: Think about how your code may be used
# Pre-requisites: 
# - Maybe you need some data or some other script to have been run?

#### Workspace setup ####
# do not keep the install.packages line - comment out if need be
# Load packages

# Read in the raw data. 
raw_data <- readr::read_csv("inputs/data/raw_data.csv")

#### Next section ####

Examples of nicely commented code include those from: Bob Nystrom, Dolatsara et al. (2021), Mathew Hauer and James Byars, and Jason Burton, Nicole Cruz, and Ulrike Hahn (Burton, Cruz, and Hahn 2021).

Finally, never rely on a user commenting and uncommenting code, or any other manual step, such as directory specification, for code to work. This will preclude the use of automated code checking and testing. This all takes time. As a rough rule of thumb, you should expect to spend at least as much time commenting and improving your code as you spent writing it.

3.5.5 Tests

Tests need to be written throughout the code, and we need to write them as we go, not all at the end. Will this slow you down? Yes. But it will help you to think, and to fix mistakes, which will make your code better and lead to better overall productivity. Code without tests needs to be viewed with suspicion and not given the benefit of the doubt. The need for other people, and ideally, automated processes, to run tests on code is one reason that we emphasize reproducibility, as well as smaller aspects such as not hardcoding file-paths, using projects, and not having spaces in file names.

It is difficult to define a complete and general suite of tests, but broadly we want to test:

  1. boundary conditions,
  2. classes,
  3. missing data,
  4. the number of observations and variables,
  5. duplicates, and
  6. regression results.

We do all this initially on our simulated data and then move to the real data. It is possible to write an infinite number of tests, but a smaller number of high-quality tests is much better than many thoughtless tests.
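A sketch of what a few of these tests might look like on simulated data, using base R (the variable names and bounds here are made up for illustration):

```r
set.seed(853)

simulated_data <- data.frame(
  division = 1:151,
  votes = runif(n = 151, min = 0, max = 1000)
)

# Number of observations and variables
stopifnot(nrow(simulated_data) == 151, ncol(simulated_data) == 2)

# Classes
stopifnot(is.numeric(simulated_data$votes))

# Boundary conditions
stopifnot(all(simulated_data$votes >= 0), all(simulated_data$votes <= 1000))

# Missing data and duplicates
stopifnot(!anyNA(simulated_data), !any(duplicated(simulated_data$division)))
```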

One type of test is “assertions”. Assertions are written throughout the code to check whether something is true and stop the code from running if not (Irving et al. 2021, 272). For instance, you might assert that an object should have class numeric; if it was tested against this assertion and found to have a character class instead, the test would fail and the script would stop running. Assertion tests in data science will typically be used in data cleaning and preparation scripts. We have more to say about these in Chapter 9. Unit tests check some complete aspect of code (Irving et al. 2021, 274). We will consider them more in Chapter 12 when we consider modelling.
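For instance, a minimal assertion of this kind can be written in base R with stopifnot() (packages such as testthat provide richer tools; the object here is made up):

```r
price <- "53"  # read in as character, not numeric

# The assertion fails, stopping the script before the
# wrong class can propagate any further
stopifnot(is.numeric(price))
#> Error: is.numeric(price) is not TRUE
```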

3.6 Efficiency

Generally in this book we are, and will continue to be, concerned with just getting something done. Not necessarily getting it done in the best or most efficient way, because to a large extent, being worried about that is a waste of time. For the most part one is better off just pushing things into the cloud, letting them run for a reasonable time, and using that time to worry about other aspects of the pipeline. But that eventually becomes infeasible. At a certain point, and this differs depending on context, efficiency becomes important. Eventually ugly or slow code, and dogmatic insistence on a particular way of doing things, have an effect. And it is at that point that one needs to be open to new approaches to ensure efficiency. There is rarely one obvious area for performance gains. Instead, it is important to develop the ability to measure, evaluate, and think.

One of the best ways to improve the efficiency of our code is preparing it in such a way that we can bring in a second pair of eyes. To make the most of their time, it is important that our code is easy to read. So we start with “code linting” and “styling”. This does not speed up our code, per se, but instead makes it more efficient when another person comes to it, or when we revisit it. This enables formal code review and refactoring, which is where we re-write code to make it better, while not changing what it does (it does the same thing, but in a different way). We then turn to measuring run time, and introduce parallel processing, where we allow our computer to run multiple processes at the same time.

3.6.1 Sharing a code environment

We have discussed at length the need to share code, and we have put forward an approach to this using GitHub. And in Chapter 10, we will discuss sharing data. But there is another requirement to enable other people to run our code. In Chapter 2 we discussed how R itself, as well as R packages, update from time to time, as new functionality is developed, errors are fixed, and other general improvements are made. And in Appendix A we describe how one advantage of the tidyverse is that it can update faster than base R, because it is more specific. This means that even if we were to share all the code and data that we use, changes in software versions could mean our code no longer runs.

The solution to this is to detail the environment that was used. There is a large number of ways to do this, and they can add a great deal of complexity, but the minimum standard is to document the version of R and R packages that were used, and make it easier for others to install that exact version. We use renv (Ushey 2022) to do this.

We first use init() to set up the infrastructure that we will need. We then use snapshot() to document the packages and versions being used, creating a “lockfile” that records this information.

If we want to see which packages we are using in the R Project, then we can use dependencies(). Doing this for the “starter_folder” indicates that the following packages are used: rmarkdown, bookdown, knitr, palmerpenguins, tidyverse, renv, haven, and readr.

We could open the lockfile, “renv.lock”, to see the exact versions if we wanted. The lockfile also documents all the other packages that were installed and where they were downloaded from. In the case of CRAN, an archive of all versions of each package is maintained, and so it is possible to get any version.

Someone coming to this project from outside could then use restore(), which would install the exact versions of the packages that we used.
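Putting the renv steps together (interactive; run in the project's R session):

```r
# install.packages("renv")  # one-off

renv::init()          # set up the renv infrastructure for this project
renv::snapshot()      # record package versions in the "renv.lock" lockfile
renv::dependencies()  # list the packages the project uses

# Someone else, after obtaining the project:
renv::restore()       # install the exact versions recorded in renv.lock
```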

3.6.2 Code linting and styling

Being fast is valuable but it is mostly about being able to iterate fast, not necessarily having code that runs fast. Backus (1981, 26) describes how even in 1954 a programmer cost at least as much as a computer, and these days compute is usually much cheaper than a programmer. Performant code is important, but it is more important to use other people’s time efficiently.

Linting and styling is the process of checking code, mostly for stylistic issues, and re-arranging code to make it easier to read. (There is another aspect of linting, which is dealing with programming errors, such as forgetting a closing bracket, but here we focus on stylistic issues.) In general, the best efficiency gain that we can make is to make it easier for others to read our code, even if this is just ourselves returning to the code after a break. Jane Street, the US proprietary trading firm, places a very strong focus on ensuring their code is readable, as a core part of risk mitigation (Minsky 2011). While we may not all have billions of dollars under the mercurial management of our code, we all would likely prefer that our code does not produce errors.

We use lint() from lintr (Hester et al. 2022) to lint our code. For instance, consider the following R code (saved as “linting_example.R”).

simulated_data =
  tibble(
    division = c(1:150, 151),
    party = sample(
      x = c("Liberal"),
      size = 151,
      replace = T
    )
  )
lint(filename = "linting_example.R")

The result is that the file “linting_example.R” is opened and the issues that lint() found are printed in “Markers” (Figure 3.6). It is then up to you to deal with the issues.

Figure 3.6: Linting results

Making the recommended changes results in code that is more readable, and consistent with best practice, as defined by Wickham (2021).

simulated_data <-
  tibble(
    division = c(1:150, 151),
    party = sample(
      x = c("Liberal"),
      size = 151,
      replace = TRUE
    )
  )

At first it may seem that some aspects the linter identifies, such as trailing whitespace and only using double quotes, are small and inconsequential. But they distract from being able to fix bigger issues. Further, if we are not able to get small things right, then how could anyone trust that we could get the big things right? Therefore, it is important to deal with all the small things that a linter identifies.

In addition to lintr, we also use styler (Müller and Walthert 2022). This will automatically adjust style issues, in contrast to the linter, which gave us a list of issues to look at. To run it we use style_file().


style_file(path = "linting_example.R")

This will automatically make changes, such as spacing and indentation, so it is important to do this regularly, rather than only once at the end of a project, so as to be able to review the changes.

3.6.3 Code review

Having dealt with all of these aspects of style, we can turn to code review. This is the process of having another person go through and critique the code. Code review is a critical part of writing code, and Irving et al. (2021, 465) describe it as “the most effective way to find bugs”. It is especially helpful, although quite daunting, when learning to code because getting feedback is a great way to improve.

When reviewing another person’s code, it is important to go out of your way to be polite and collegial. Small aspects of style, such as spacing and separation, should have been taken care of by a linter and styler; if not, then make a general recommendation about that. Do not review too much code at any one time: at most a few hundred lines, which should take around an hour, because after this efficacy wears off (Cohen, Teleki, and Brown 2006, 79). Most of your time as a reviewer in data science should be spent on aspects such as:

  1. Is there an informative README and how could it be improved?
  2. Are the file names and variable names consistent, informative, and meaningful?
  3. Do the comments allow you to understand why something is being done?
  4. Are the tests both appropriate and sufficient? Are there edge cases or corner solutions that are not considered? Similarly, are there unnecessary tests that could be removed?
  5. Are there magic numbers that could be changed to variables and explained?
  6. Is there duplicated code that could be changed?
  7. Are there any outstanding warnings that should be addressed?
  8. Are there any especially large functions or pipes that could be separated into smaller ones?
  9. Does the structure of the project help you to understand what is happening and why?
  10. Can we change any of the code to data (Irving et al. 2021, 462)?

For instance, consider some code that looked for the names of prime ministers and presidents. When we first wrote this code we likely added the relevant names directly into the code. But as part of code review, we might instead recommend that this be changed. For instance, we might recommend creating a small dataset of relevant names, and then re-writing the code to have it look-up that dataset.
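A hypothetical sketch of that recommendation, with made-up names: instead of hardcoding the names, keep them in a small dataset (here a data frame standing in for a CSV file) and have the code look them up.

```r
# The names live in data, not in the code, so updating them
# does not require changing the matching logic
leaders <- data.frame(name = c("Trudeau", "Ardern", "Albanese"))

sentences <- c(
  "Trudeau spoke in the House today.",
  "The report did not mention any leader."
)

mentions_leader <- grepl(paste(leaders$name, collapse = "|"), sentences)
mentions_leader
#> [1]  TRUE FALSE
```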

Code review ensures that the code can be understood by at least one other person. This is a critical part of building knowledge about the world. At Google, code review is not primarily about finding defects, although that may happen, but is instead about ensuring readability and maintainability, as well as education (Sadowski et al. 2018). This is also the case at Jane Street, where code review is used to catch bugs, share institutional knowledge by ensuring that what is known by one programmer is also known by others, assist with training, and oblige staff to write code that can be read (Minsky 2015).

Finally, code review does not have to, and should not, be an onerous days-consuming process of reading all the code. The best code review is a quick review of just one file, focused on suggesting changes to just a handful of lines. Indeed, it may be better to have a review done by a small team of people rather than one individual.

3.6.4 Code refactoring

To refactor code means to re-write it so that the new code achieves the same outcome as the old code; it is just that the new code does it better. For instance, Chawla (2020) discusses how the code underpinning an important UK COVID model was initially written by epidemiologists, and months later clarified and cleaned up by a team from the Royal Society, Microsoft, and GitHub. This was valuable because it provided more confidence in the model, even though both versions produced the same outputs, given the same inputs.

We typically refer to code refactoring in relation to code that someone else wrote. (Although it may be that we actually wrote the code, and it was just that it was some time ago.) When we start to refactor code, we want to make sure that the re-written code achieves the same outcomes as the original code. This means that it is important to have a suite of appropriate tests written that we can depend on. If these do not exist, then we may need to create them.

We rewrite code to make it easier for others to understand, which in turn allows more confidence in our conclusions. But before we can do that, we need to understand what the existing code is doing. One way to get started is to go through the code and add extensive comments. These comments are different from normal comments: they are part of the active process of coming to understand what each code chunk is trying to do and how it could be improved.

Refactoring code is a great opportunity to ensure that it satisfies best practice. Trisovic et al. (2022) detail some core recommendations based on examining 9,000 R scripts, and a refactor is a great opportunity to ensure that these are met. For instance, some of them include:

  1. Removing setwd() and any absolute paths, and ensuring that only relative paths, in relation to the “.Rproj” file, are used.
  2. Ensuring there is a clear order of execution. We have recommended using numbers in filenames to achieve this, but more sophisticated approaches, such as targets (Landau 2021), could be used instead.
  3. Ensuring that code runs on more than one computer.

For instance, consider the following code:



setwd("/path/on/my/computer/")

library(tidyverse)

d = read_csv("cars.csv")

mtcars =
  mtcars |>
  mutate(K_P_L = mpg / 2.352)



We could change that, starting by creating an R Project which enables us to remove setwd(), grouping all the library() calls at the top, using “<-” instead of “=”, and being consistent with variable names:


library(tidyverse)

cars_data <- read_csv("cars.csv")

mpg_to_kpl_conversion_factor <- 2.352

mtcars <-
  mtcars |>
  mutate(kpl = mpg / mpg_to_kpl_conversion_factor)

3.6.5 Parallel processing

Sometimes code is slow because the computer needs to do the same thing many times. Sometimes it is possible to take advantage of this and enable these jobs to be done at the same time using parallel processing. This will be especially useful in Chapter 12 for modelling.

To get started we can use tic() and toc() from tictoc (Izrailev 2022) to time various aspects of our code. This is useful with parallel processing, but also more generally, to help us find out where the largest delays are.

library(tictoc)

tic("First bit of code")
print("Fast code")
#> [1] "Fast code"
toc()
#> First bit of code: 0.002 sec elapsed

tic("Second bit of code")
Sys.sleep(3)
print("Slow code")
#> [1] "Slow code"
toc()
#> Second bit of code: 3.007 sec elapsed

And so we know that there is something slowing down the code, which in this artificial case is Sys.sleep() causing a delay of three seconds.

We could use parallel, which is part of base R, to run functions in parallel. We could also use future (Bengtsson 2021), which brings additional features. To get started with future we use plan() to specify whether we want to run things sequentially (“sequential”) or in parallel (“multisession”). We then wrap whatever we want this applied to within future().

To see this in action we will create a dataset and then implement a function on a row-wise basis.


library(future)
library(tidyverse)

simulated_data <-
  tibble(
    random_draws = runif(
      n = 1000000,
      min = 0,
      max = 1000
    ),
    more_random_draws = runif(
      n = 1000000,
      min = 0,
      max = 1000
    )
  )


plan(sequential)

simulated_data <-
  simulated_data |>
  rowwise() |>
  mutate(
    which_is_smaller =
      min(c(random_draws, more_random_draws))
  )

plan(multisession)

simulated_data <-
  future(
    simulated_data |>
      rowwise() |>
      mutate(
        which_is_smaller =
          min(c(random_draws, more_random_draws))
      )
  ) |>
  value()

The sequential approach takes about 5 seconds, while the multisession approach takes about 0.3 seconds.

3.7 Concluding remarks

In this chapter we have covered a lot of ground and it is normal to feel overwhelmed. Come back to the Quarto section as needed, and know that almost everyone is confused by Git and GitHub and just knows enough to get by. And while there was a lot of material on efficiency, the most important aspect of performant code is making it easier for another person to read, even if that person is just yourself returning after a break.

3.8 Exercises


  1. (Plan) Consider the following scenario: In a certain country there are only ever four parties that could win a seat in parliament. Whichever candidate has a plurality of votes in the area associated with a given seat wins that seat. The parliament is made up of 175 total seats. An analyst is interested in the number of votes for each party by seat. Please sketch what that dataset could look like, and then sketch a graph that you could build to show all observations.
  2. (Simulate I) Please further consider the scenario described, and decide which of the following could be used to simulate the situation (select all that apply)?
    1. tibble(seat = rep(1:175, each = 4), party = rep(x = 1:4, times = 175), votes = runif(n = 175 * 4, min = 0, max = 1000) |> floor())
    2. tibble(seat = rep(1:175, each = 4), party = sample(x = 1:4, size = 175, replace = TRUE), votes = runif(n = 175 * 4, min = 0, max = 1000) |> floor())
    3. tibble(seat = rep(1:175, each = 4), party = rep(x = 1:4, times = 175), votes = sample(x = 1:1000, size = 175 * 4))
    4. tibble(seat = rep(1:175, each = 4), party = sample(x = 1:4, size = 175, replace = TRUE), votes = sample(x = 1:1000, size = 175 * 4))
  3. (Simulate II) Please write three unit tests based on this simulation.
  4. (Acquire) Please identify one possible source of actual data about voting in a country of interest to you.
  5. (Explore) Assume that tidyverse is loaded and the dataset “election_results” has the columns: “seat”, “party”, and “votes”. Which of the following would result in a count of the number of seats won by each party (pick one)?
    1. election_results |> group_by(seat) |> slice_max(votes, n = 1) |> ungroup() |> count(party)
    2. election_results |> group_by(seat) |> slice_max(votes, n = 1) |> count(party)
    3. election_results |> group_by(party) |> slice_max(votes, n = 1) |> ungroup() |> count(party)
    4. election_results |> group_by(party) |> slice_max(votes, n = 1) |> count(seat)
  6. (Communicate) Please write two paragraphs as if you had gathered data from that source, and had built a graph. The exact details contained in the paragraphs do not have to be factual (i.e. you do not actually have to get the data nor create the graphs).


  1. According to Alexander (2019) research is reproducible if (pick one)?
    1. It is published in peer-reviewed journals.
    2. All of the materials used in the study are provided.
    3. It can be reproduced exactly without the authors providing materials.
    4. It can be reproduced exactly, given all the materials used in the study.
  2. According to the timeline of Gelman (2016), a) when did Paul Meehl identify various issues; and b) when did null hypothesis significance testing (NHST) become controversial (pick one)?
    1. 1970s-1980s; 1990s-2000.
    2. 1960s-1970s; 1980s-1990.
    3. 1970s-1980s; 1980s-1990.
    4. 1960s-1970s; 1990s-2000.
  3. Which of the following are components of the project layout recommended by Wilson et al. (2017) (select all that apply)?
    1. requirements.txt
    2. doc
    3. data
    4. LICENSE
    5. README
    6. src
    7. results
  4. Based on Alexander (2021) please write a paragraph about some of the barriers you overcame, or still face, with regard to sharing code that you wrote.
  5. According to Wickham (2021) for naming files, how would these files “00_get_data.R”, “get data.R” be classified (pick one)?
    1. bad; bad.
    2. good; bad.
    3. bad; good.
    4. good; good.
  6. Which of the following would result in bold text in Quarto (pick one)?
    1. **bold**
    2. ##bold##
    3. *bold*
    4. #bold#
  7. Which option would hide the warnings in a Quarto R chunk (pick one)?
    1. echo: false
    2. eval: false
    3. warning: false
    4. message: false
  8. Which options would run the R code chunk and display the results, but not show the code in a Quarto R chunk (pick one)?
    1. echo: false
    2. include: false
    3. eval: false
    4. warning: false
    5. message: false
  9. Why are R Projects important (select all that apply)?
    1. They help with reproducibility.
    2. They make it easier to share code.
    3. They make your workspace more organized.
    4. They ensure reproducibility.
  10. Please discuss a circumstance in which an R Project would be useful.
  11. Consider this sequence: “git pull, git status, ________, git status, git commit -m "My message", git push”. What is the missing step (pick one)?
    1. git add -A.
    2. git status.
    3. git pull.
    4. git push.
  12. Assuming the packages and datasets have been loaded, what is the mistake in this code: DoctorVisits |> filter(visits) (pick one)?
    1. visits
    2. DoctorVisits
    3. filter
    4. |>
  13. What is a reprex and why is it important to be able to make one (select all that apply)?
    1. A reproducible example that enables your error to be reproduced.
    2. A reproducible example that helps others help you.
    3. A reproducible example during the construction of which you may solve your own problem.
    4. A reproducible example that demonstrates you have actually tried to help yourself.
  14. According to Gelfand (2021), what is the key part of “If you need help getting unstuck, the first step is to create a reprex, or reproducible example. The goal of a reprex is to package your problematic code in such a way that other people can run it and feel your pain. Then, hopefully, they can provide a solution and put you out of your misery.” (pick one)?
    1. package your problematic code
    2. other people can run it and feel your pain
    3. the first step is to create a reprex
    4. they can provide a solution and put you out of your misery
  15. According to Gelfand (2021), what are the three key aspects of a reprex (select all that apply)?
    1. data
    2. only the libraries that are necessary and all the libraries that are necessary
    3. relevant code and only relevant code
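A reprex is just a self-contained snippet that another person can paste into a fresh R session. A toy sketch with the three ingredients above: only the library that is needed, data built inline, and only the code relevant to the question (the dataset here is invented for illustration):

```r
library(dplyr) # the only library this example needs

# Data built inline, small enough to paste anywhere.
example_votes <-
  tibble(
    party = c("A", "B", "A", "B"),
    votes = c(120, 80, 90, 150)
  )

# Only the code relevant to the question being asked.
example_totals <-
  example_votes |>
  group_by(party) |>
  summarise(total = sum(votes))

example_totals
```

Copying a snippet like this and calling reprex::reprex() renders it, together with its output, as markdown ready to paste into a GitHub issue or Gist.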
  1. The following code produces an error. Please use reprex (Bryan et al. 2019) to build a reproducible example that you could use to get help with it, and submit the reprex using a GitHub Gist. You should simplify many aspects including reducing the number of packages, changing the dataset, and simplifying the filter() and mutate().

oecd_gdp <-
  read_csv("...") # URL of the OECD GDP dataset (elided here)

oecd_gdp_most_recent <-
  oecd_gdp |>
  filter(
    TIME == "2021-Q3",
    SUBJECT == "TOT",
    LOCATION %in% c(
      "AUS", "CAN", "CHL", "DEU", "GBR",
      "IDN", "ESP", "NZL", "USA", "ZAF"
    )
  ) |>
  mutate(
    european = if_else(
      LOCATION %in% c("DEU", "GBR", "ESP"),
      "European",
      "Not european"
    ),
    hemisphere = if_else(
      LOCATION %in%
        c("CAN", "DEU", "GBR", "ESP", "USA"),
      "Northern Hemisphere",
      "Southern Hemisphere"
    )
  )

oecd_gdp_most_recent |>
  ggplot(mapping = aes(x = LOCATION, y = Value)) |>
  geom_bar(stat = "identity")


Code review is an important part of working as a professional (Sadowski et al. 2018). Please put together a small Quarto file that downloads a dataset using opendatatoronto, cleans it, and makes a graph. Then exchange it with someone else. Following the advice of Google (2022), please provide them with a review of their code. That should be at least two pages of single-spaced content. Submit the review as a PDF.


At about this point the Donaldson Paper from Appendix D would be appropriate.