Automation in and of Jupyter Notebooks

Authors
Dr. Sangeetha Nandakumar | Dr. Nicholas Del Grosso

Setup

Import Libraries

import papermill as pm

Download Data

import os
import owncloud

# Ensure the 'command_line' directory exists
if not os.path.exists('command_line'):
    print('Creating directory for command_line')
    os.mkdir('command_line')

# Download text_config.txt
if not os.path.exists('command_line/text_config.txt'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/yDiGZT44SXLvK5r')
    oc.get_file('/', 'command_line/text_config.txt')

# Download python_config.py
if not os.path.exists('command_line/python_config.py'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/apw9RMXjgfhQaK5')
    oc.get_file('/', 'command_line/python_config.py')

# Download notebook_config.ipynb
if not os.path.exists('command_line/notebook_config.ipynb'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/lwVMGbzKQXFuIax')
    oc.get_file('/', 'command_line/notebook_config.ipynb')

# Ensure the 'data' and 'parameterization' directories exist
if not os.path.exists('data'):
    print('Creating directory for data')
    os.mkdir('data')

if not os.path.exists('parameterization'):
    print('Creating directory for parameterization')
    os.mkdir('parameterization')

# Download 2016-12-14_Cori.csv
if not os.path.exists('data/2016-12-14_Cori.csv'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/nih6mIiDSLOlPHU')
    oc.get_file('/', 'data/2016-12-14_Cori.csv')

# Download 01_notebook_brain_area.ipynb
if not os.path.exists('parameterization/01_notebook_brain_area.ipynb'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/dkPOipzGNjkBiXQ')
    oc.get_file('/', 'parameterization/01_notebook_brain_area.ipynb')

# Download 02_notebook_fixed_response.ipynb
if not os.path.exists('parameterization/02_notebook_fixed_response.ipynb'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/WReS5HIxAK8cws4')
    oc.get_file('/', 'parameterization/02_notebook_fixed_response.ipynb')

# Download 03_notebook_fixed_feedback.ipynb
if not os.path.exists('parameterization/03_notebook_fixed_feedback.ipynb'):
    oc = owncloud.Client.from_public_link('https://uni-bonn.sciebo.de/s/QxcX90gL9B7paar')
    oc.get_file('/', 'parameterization/03_notebook_fixed_feedback.ipynb')
Creating directory for command_line
Creating directory for parameterization

In this notebook, we learn how to use command-line tools to automate the execution and management of Jupyter notebooks. We start by learning how to run command-line commands, such as managing files or installing software, directly from the notebook. Then we explore how to run entire notebooks from the command line, which helps when we need to automate tasks. Next, we see how to pass parameters to a template notebook to generate automated analysis reports. Finally, we see how to batch-process notebooks using Papermill.

Section 1: Running Command-Line Commands in Jupyter

A command line is a text-based interface that allows users to interact with their computer’s operating system by typing commands, rather than using graphical interfaces. In this interface, users can navigate directories, manage files, run programs, and perform a wide range of tasks by typing specific commands. Popular command-line environments include Bash (common in Linux and macOS) and the Windows Command Prompt or PowerShell.

As researchers, we may need the command line for file management (moving, renaming, deleting, or organizing datasets), for automating repetitive tasks that involve external tools, for installing software, and more.

Incorporating command-line commands into our analysis notebooks allows us to integrate external tools, automate repetitive tasks, and manage data, all within the same environment.
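
In Jupyter, prefixing a line with ! runs it in the system shell, and the command's output can even be captured into a Python variable. A minimal sketch (assuming python is on your PATH):

# Run a shell command directly from a notebook cell
!python --version

# Capture the command's output as a list of lines (an IPython SList)
version_lines = !python --version
print(version_lines)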

Exercises

Example: Install pandas

# !pip install pandas

Exercise: Install numpy

Solution
# !pip install numpy

Exercise: Install seaborn

Solution
# !pip install seaborn

Any option that the command-line command accepts can be used as well.

Example: Upgrade matplotlib

# !pip install --upgrade matplotlib

Exercise: Upgrade seaborn

Solution
# !pip install --upgrade seaborn

Exercise: Upgrade nbformat

Solution
# !pip install --upgrade nbformat

Let’s practice converting scripts to notebooks

Example: Convert script.py (run the code below to generate the file) to a notebook. How does the resulting notebook look?

%%writefile script.py
num_mouse = 10
num_contrast_left = 4
num_contrast_right = 4
!jupytext --to notebook script.py

Exercise: Convert script.py (run the code below to generate the file) to a notebook. How does the resulting notebook look?

Solution
%%writefile script.py
num_mouse = 10
num_contrast_left = 4
num_contrast_right = 4

print(num_mouse)
!jupytext --to notebook script.py

Exercise: Convert script.py (run the code below to generate the file) to a notebook. How does the resulting notebook look?

Solution
%%writefile script.py
num_mouse = 10
num_contrast_left = 4
num_contrast_right = 4

num_mouse
!jupytext --to notebook script.py
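
jupytext also understands the "percent" script format, in which # %% starts a new code cell and # %% [markdown] starts a markdown cell; the remaining exercises write script.py in this format. A tiny illustrative script:

%%writefile script.py
# %% [markdown]
# A markdown cell rendered as text in the notebook

# %%
# An ordinary code cell
num_mouse = 10

Converting this with jupytext --to notebook, as above, produces a notebook with one markdown cell and one code cell.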

Exercise: Convert script.py (run the code below to generate the file) to a notebook. How does the resulting notebook look?

Solution
%%writefile script.py
# %% [markdown]
# Title

# %%
a = 10
!jupytext --to notebook script.py

Exercise: Create script.py with the title “Data Analysis” and a=10, b=100. Convert it to a notebook. How does the resulting notebook look?

Solution
%%writefile script.py
# %% [markdown]
# Data Analysis

# %%
a = 10
b = 100
!jupytext --to notebook script.py
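
The conversion also works in the other direction: jupytext can turn a notebook back into a script. A minimal sketch, assuming script.ipynb was produced by one of the conversions above (this overwrites script.py with a percent-format script):

!jupytext --to py:percent script.ipynb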

Example: Create a new directory called data_1

!mkdir data_1

Exercise: Create a new directory data_2

Solution
!mkdir data_2

Exercise: Create a new directory data_1/data_1_sub (data_1\data_1_sub for windows machines)

Solution
!mkdir data_1\data_1_sub

We can run Linux command-line commands within a cell using %%bash

Example: Copy command_line/python_config.py to the data_1 directory

%%bash
cp command_line/python_config.py data_1/python_config.py

Exercise: Copy command_line/text_config.txt to data_1

Solution
%%bash
cp command_line/text_config.txt data_1/text_config.txt

Exercise: Copy command_line/notebook_config.ipynb to data_1/data_1_sub with the name nb_config.ipynb

Solution
%%bash
cp command_line/notebook_config.ipynb data_1/data_1_sub/nb_config.ipynb

Example: Delete data_1/text_config.txt (file only)

%%bash
rm data_1/text_config.txt

Exercise: Delete data_1/python_config.py (file only)

Solution
%%bash
rm data_1/python_config.py

Exercise: Delete data_2 directory

Solution
%%bash
rm -r data_2

Exercise: Delete data_1 including sub-directories

Solution
%%bash
rm -r data_1

Section 2: Executing Notebooks from Command Line

Running a notebook from the command line is useful for automating the execution of Jupyter notebooks as part of a workflow or pipeline. It lets us integrate notebooks with task-scheduling tools so that routine tasks run without manually opening and running the notebook. When dealing with multiple notebooks, running from the command line also allows for batch processing, enabling us to execute several notebooks sequentially without manually interacting with each one.

Here we will look into a tool called papermill that can execute notebooks from the command line. For this, we use three notebooks:

  1. parameterization/01_notebook_brain_area.ipynb: Filters 2016-12-14_Cori.csv to the selected brain area (VISp by default) and writes a processed csv file.
  2. parameterization/02_notebook_fixed_response.ipynb: For the selected response type, examines how feedback affects LFP signals in that brain area, using the processed csv.
  3. parameterization/03_notebook_fixed_feedback.ipynb: For the selected feedback type, examines how the mice's response affects LFP signals in that brain area, using the processed csv.

Notebooks 2 and 3 are not dependent on each other. Both use the output from notebook 1 for their analysis.
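
Every papermill call in this section follows the same shape: the input (template) notebook first, then the path where the executed copy should be written. A minimal sketch with placeholder file names (commented out, since these files do not exist):

# !papermill input_template.ipynb executed_output.ipynb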

Exercises

Example: Execute notebook 1 as output.ipynb and examine it. Was any other file generated from this?

!papermill parameterization/01_notebook_brain_area.ipynb output.ipynb

Exercise: Execute notebook 2 as output.ipynb and examine the output.

Solution
!papermill parameterization/02_notebook_fixed_response.ipynb output.ipynb

Exercise: Execute notebook 3 as output.ipynb and examine the output.

Solution
!papermill parameterization/03_notebook_fixed_feedback.ipynb output.ipynb

Now delete the output_data/processed_brain_area.csv file.

Exercise: Execute notebook 3 as output.ipynb and examine it. What do you see?

Solution
!papermill parameterization/03_notebook_fixed_feedback.ipynb output.ipynb

Papermill reports an error in the cell output. In the generated output.ipynb, you will see a large error message in red at the top of the notebook and another red message just before the cell where the error occurred.

Let’s see how to execute them sequentially

Example: Execute notebooks 1 and 2 one after the other.

!papermill parameterization/01_notebook_brain_area.ipynb output_1.ipynb
!papermill parameterization/02_notebook_fixed_response.ipynb output_2.ipynb

Exercise: Execute notebooks 1 and 3 one after the other.

Solution
!papermill parameterization/01_notebook_brain_area.ipynb output_1.ipynb
!papermill parameterization/03_notebook_fixed_feedback.ipynb output_3.ipynb

Exercise: Execute all the three notebooks one after the other

Solution
!papermill parameterization/01_notebook_brain_area.ipynb output_1.ipynb
!papermill parameterization/02_notebook_fixed_response.ipynb output_2.ipynb
!papermill parameterization/03_notebook_fixed_feedback.ipynb output_3.ipynb

Section 3: Passing in Parameters To Notebooks With Papermill

Papermill helps with parameterizing Jupyter notebooks by allowing us to inject new inputs (parameters) into a notebook before running it. Parameters have placeholders in the template notebook, and when we run Papermill, it fills those placeholders with the actual values we provide. Papermill then executes the entire notebook with the new inputs, saving the results in a new output notebook. This makes it easy to reuse the same notebook as a template for different data or settings, essentially creating a separate analysis report for each set of parameters.

For this example, we will use the same notebooks as in the previous section and get some practice with passing parameters to template notebooks.

Setting Parameters

To let papermill know that a cell contains parameters:

  1. Put all parameters in a single cell, before any other cell that uses them
  2. Click on the cell and open the property inspector (the gear icon in the right sidebar)
  3. Add a tag named parameters under Cell Tags

Do this for all three notebooks; an illustrative parameters cell is sketched below.
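
For reference, the cell tagged parameters in notebook 1 might look like the following sketch (illustrative defaults; brain_area and output_csv are the names passed with -p in the exercises):

brain_area = 'VISp'                                  # default brain area (see Section 2)
output_csv = 'output_data/processed_brain_area.csv'  # default path for the processed csv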

With papermill, we can pass a new value for any variable inside the cell tagged parameters by adding a -p flag for each parameter.

In this section, let us use the three notebooks as templates and generate reports for different brain areas, responses, and feedback types to learn how papermill works. The same technique can be applied to more complex problems as well.

Exercises

Example: Run notebook 1, specifying that the output should be called output_data/processed_VISp.csv

!papermill parameterization/01_notebook_brain_area.ipynb -p output_csv output_data/processed_VISp.csv 01_notebook_visp.ipynb

Exercise: Run notebook 2 specifying that the input csv is now called output_data/processed_VISp.csv

Solution
!papermill parameterization/02_notebook_fixed_response.ipynb -p input_csv output_data/processed_VISp.csv 02_notebook_fixed_response_visp.ipynb

Exercise: Run notebook 3 specifying that the input csv is now called output_data/processed_VISp.csv

Solution
!papermill parameterization/03_notebook_fixed_feedback.ipynb -p input_csv output_data/processed_VISp.csv 03_notebook_fixed_feedback_visp.ipynb

Exercise: Run notebook 3 specifying that the input csv is now called output_data/processed_ACA.csv. Examine the output notebook. What information do you get?

Solution
!papermill parameterization/03_notebook_fixed_feedback.ipynb -p input_csv output_data/processed_ACA.csv 03_notebook_fixed_feedback_aca.ipynb

Example: Run notebook 1 specifying that the brain area is ACA and output should be called processed_ACA.csv

!papermill parameterization/01_notebook_brain_area.ipynb -p brain_area ACA -p output_csv output_data/processed_ACA.csv 01_notebook_aca.ipynb

Exercise: Run notebook 2 specifying that the input file is output_data/processed_ACA.csv and response_type as 0

Solution
!papermill parameterization/02_notebook_fixed_response.ipynb -p input_csv output_data/processed_ACA.csv -p response_type 0 02_notebook_response_0_aca.ipynb

Exercise: Run notebook 2 specifying that the input file is output_data/processed_ACA.csv and response_type as -1. Compare with previous report (output notebook)

Solution
!papermill parameterization/02_notebook_fixed_response.ipynb -p input_csv output_data/processed_ACA.csv -p response_type -1 02_notebook_response_min_1_aca.ipynb

Exercise: Run all three notebooks one after the other for brain area SUB, response type 0, and feedback type -1.

Solution
!papermill parameterization/01_notebook_brain_area.ipynb -p brain_area SUB -p output_csv output_data/processed_SUB.csv 01_notebook_sub.ipynb
!papermill parameterization/02_notebook_fixed_response.ipynb -p input_csv output_data/processed_SUB.csv -p response_type 0 02_notebook_response_0_sub.ipynb
!papermill parameterization/03_notebook_fixed_feedback.ipynb -p input_csv output_data/processed_SUB.csv -p feedback_type -1 03_notebook_fixed_feedback_sub.ipynb

Section 4: Batch Processing Notebooks With Papermill Python API

In a Jupyter notebook, you can use a for loop to automate the execution of multiple notebooks with different input parameters using papermill. This approach allows for dynamic notebook execution by iterating over a list of notebooks and their corresponding parameter sets, enabling each notebook to be run with customized inputs. During each iteration of the loop, papermill executes the notebook with the specified parameters and generates a new output notebook, which can be saved with a unique filename.

Let’s get some practice with batch processing in Python using for-loops.
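
Papermill's Python API mirrors the command line: pm.execute_notebook takes the template path, the output path, and a dictionary of parameters to inject into the cell tagged parameters. A minimal sketch using notebook 1 from the previous sections:

pm.execute_notebook(
    'parameterization/01_notebook_brain_area.ipynb',  # template notebook
    'output.ipynb',                                   # executed copy is written here
    parameters=dict(brain_area='VISp')                # overrides for the tagged parameters cell
)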

Exercises

import pandas as pd
df = pd.read_csv('data/2016-12-14_Cori.csv')
df.brain_area_lfp.unique()
array(['ACA', 'LS', 'MOs', 'CA3', 'DG', 'SUB', 'VISp'], dtype=object)

Example: Run notebook 1 for brain areas LS and CA3

template_notebook = 'parameterization/01_notebook_brain_area.ipynb'
params = [
    dict(output_csv='output_data/01_notebook_LS.csv', brain_area='LS'),
    dict(output_csv='output_data/01_notebook_CA3.csv', brain_area='CA3'),
]
output_nb_names = ['01_notebook_LS.ipynb', '01_notebook_CA3.ipynb']

for param, output_nb_name in zip(params, output_nb_names):
    pm.execute_notebook(
        template_notebook,
        output_nb_name,
        parameters=param
    )

Exercise: Run notebook 2 for brain area LS and response types of 1, 0, and -1.

Solution
template_notebook = 'parameterization/02_notebook_fixed_response.ipynb'
params = [
    dict(input_csv='output_data/01_notebook_LS.csv', response_type=1),
    dict(input_csv='output_data/01_notebook_LS.csv', response_type=0),
    dict(input_csv='output_data/01_notebook_LS.csv', response_type=-1),
]
output_nb_names = ['02_notebook_LS_response_left.ipynb', '02_notebook_LS_response_zero.ipynb', '02_notebook_LS_response_right.ipynb']

for param, output_nb_name in zip(params, output_nb_names):
    pm.execute_notebook(
        template_notebook,
        output_nb_name,
        parameters=param
    )

Exercise: Run notebook 3 for brain area LS and feedback types 1 and -1

Solution
template_notebook = 'parameterization/03_notebook_fixed_feedback.ipynb'
params = [
    dict(input_csv='output_data/01_notebook_LS.csv', feedback_type=1),
    dict(input_csv='output_data/01_notebook_LS.csv', feedback_type=-1),
]
output_nb_names = ['03_notebook_reward.ipynb', '03_notebook_punish.ipynb']

for param, output_nb_name in zip(params, output_nb_names):
    pm.execute_notebook(
        template_notebook,
        output_nb_name,
        parameters=param
    )

We can automate the naming of outputs by using Python f-strings, which helps us follow a structured file-naming scheme. The names are then set automatically inside the for-loop rather than in the params dictionaries.

Example: Run notebook 1 for brain areas LS and CA3

template_notebook = 'parameterization/01_notebook_brain_area.ipynb'
params = [dict(brain_area='LS'), dict(brain_area='CA3')]

for param in params:
    param['output_csv'] = f"output_data/01_notebook_{param['brain_area']}.csv"
    output_nb_name = f"01_notebook_{param['brain_area']}.ipynb"

    pm.execute_notebook(
        template_notebook,
        output_nb_name,
        parameters=param
    )

Exercise: Run notebook 1 for brain areas LS, CA3, and SUB

Solution
template_notebook = 'parameterization/01_notebook_brain_area.ipynb'
params = [dict(brain_area='LS'), dict(brain_area='CA3'), dict(brain_area='SUB')]

for param in params:
    param['output_csv'] = f"output_data/01_notebook_{param['brain_area']}.csv"
    output_nb_name = f"01_notebook_{param['brain_area']}.ipynb"

    pm.execute_notebook(
        template_notebook,
        output_nb_name,
        parameters=param
    )

This is especially helpful when we have to run the template notebook for a large number of values of a given parameter.
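
Instead of typing the parameter list by hand, it can also be built from the data itself; a small sketch reusing the brain_area_lfp column shown at the start of this section:

import pandas as pd

df = pd.read_csv('data/2016-12-14_Cori.csv')
# One parameter dictionary per unique brain area in the recording
params = [dict(brain_area=area) for area in df.brain_area_lfp.unique()]

The resulting params list can then be fed through the same for-loop as above.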

Exercise: Run notebook 1 for brain areas LS, CA3, SUB, VISp, MOs, DG, and ACA

Solution
template_notebook = 'parameterization/01_notebook_brain_area.ipynb'
params = [
    dict(brain_area='LS'), dict(brain_area='CA3'), dict(brain_area='SUB'),
    dict(brain_area='VISp'), dict(brain_area='MOs'), dict(brain_area='DG'),
    dict(brain_area='ACA'),
]

for param in params:
    param['output_csv'] = f"output_data/01_notebook_{param['brain_area']}.csv"
    output_nb_name = f"01_notebook_{param['brain_area']}.ipynb"

    pm.execute_notebook(
        template_notebook,
        output_nb_name,
        parameters=param
    )

(DEMO) We can use nested for-loops to run all three notebooks for every parameter combination. This code can live in a separate notebook that is executed whenever we have to re-run the entire analysis workflow, without having to go back and change parameters in the template notebooks.

template_notebook_1 = 'parameterization/01_notebook_brain_area.ipynb'
template_notebook_2 = 'parameterization/02_notebook_fixed_response.ipynb'
template_notebook_3 = 'parameterization/03_notebook_fixed_feedback.ipynb'

params_brain_area = [
    dict(brain_area='LS'), dict(brain_area='CA3'), dict(brain_area='SUB'),
    dict(brain_area='VISp'), dict(brain_area='MOs'), dict(brain_area='DG'),
    dict(brain_area='ACA'),
]
params_response = [dict(response_type=1), dict(response_type=0), dict(response_type=-1)]
params_feedback = [dict(feedback_type=1), dict(feedback_type=-1)]

for param_brain_area in params_brain_area:
    brain_area = param_brain_area['brain_area']
    param_brain_area['output_csv'] = f'output_data/01_notebook_{brain_area}.csv'
    output_nb_name = f'01_notebook_{brain_area}.ipynb'

    # Notebook 1 filters the raw data to this brain area
    pm.execute_notebook(
        template_notebook_1,
        output_nb_name,
        parameters=param_brain_area
    )

    for param_response in params_response:
        output_nb_name = f"02_notebook_{brain_area}_response_{param_response['response_type']}.ipynb"
        param_response['input_csv'] = f'output_data/01_notebook_{brain_area}.csv'

        # Notebook 2 analyses the processed csv for each response type
        pm.execute_notebook(
            template_notebook_2,
            output_nb_name,
            parameters=param_response
        )

    for param_feedback in params_feedback:
        output_nb_name = f"03_notebook_{brain_area}_feedback_{param_feedback['feedback_type']}.ipynb"
        param_feedback['input_csv'] = f'output_data/01_notebook_{brain_area}.csv'

        # Notebook 3 analyses the processed csv for each feedback type
        pm.execute_notebook(
            template_notebook_3,
            output_nb_name,
            parameters=param_feedback
        )
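
As a design note, parameter combinations like these can also be generated with itertools.product instead of hand-written nesting. A sketch for notebook 2, assuming the processed CSVs for each brain area already exist from the runs above:

from itertools import product

brain_areas = ['LS', 'CA3', 'SUB', 'VISp', 'MOs', 'DG', 'ACA']
response_types = [1, 0, -1]

# One run of notebook 2 per (brain area, response type) combination
for brain_area, response_type in product(brain_areas, response_types):
    pm.execute_notebook(
        'parameterization/02_notebook_fixed_response.ipynb',
        f'02_notebook_{brain_area}_response_{response_type}.ipynb',
        parameters=dict(
            input_csv=f'output_data/01_notebook_{brain_area}.csv',
            response_type=response_type,
        ),
    )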