Running Commands while Tracking Provenance
Authors
DataLad allows us to add any type of data to an existing dataset and track it. So far, however, this has been a manual process: we added data and then ran datalad save to commit it to our repository. When modifying existing files, we ran datalad get to download them and datalad unlock to make them modifiable, edited the files, and then ran datalad save again.
However, in most data analysis projects, we don't generate and modify data manually. Instead, we use programs and scripts that do so. While we could transfer this manual approach to running scripts, it would be rather tedious: we would always have to make sure the required files are present and unlocked, and save any new files after the script ran.
To make life easier, DataLad provides a run command. This command can execute any shell command (e.g. running a Python script), make sure that inputs are available, and keep track of outputs. It can even rerun individual commands or whole pipelines, making it easy to reproduce results.
Section 1: Datalad, Run!
Background
The basic syntax of the run command looks like this: datalad run "<command>".
Here, <command> can be any command that you could execute in a terminal, for example "python script.py".
DataLad will automatically save any new files generated by the command and write a commit message for them. Note that datalad run requires the dataset to be in a clean state: if there are any untracked files or unsaved modifications, you need to run datalad save first.
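For example, a minimal sequence could look like this (code/my_script.py is just a hypothetical placeholder, not a file from this tutorial's dataset):
!datalad status                                             # check that the dataset is clean
!datalad save -m "add my_script.py"                         # save any pending changes first
!datalad run -m "run my_script" "python code/my_script.py"  # execute the script and record it
!git log -1                                                 # inspect the commit created by run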
Exercises
In the following exercises, we are going to practice using the datalad run command and observe the commit messages that are automatically generated. We'll start by running simple shell commands (e.g. echo) and then attempt to execute a Python script through datalad run. Here are the commands you need to know:
| Command | Description |
|---|---|
| datalad save | Commit all currently untracked changes |
| datalad run "python script.py" | Run the Python script script.py |
| datalad run -m "run script" "python script.py" | Run the Python script script.py and add the commit message "run script" |
| git log | View the dataset's history stored in the git log |
| git log -1 | View the last entry in the git log |
| echo "Hello" > file.txt | Write "Hello" to file.txt |
Run the cell below to download the penguins dataset, change the directory to penguins/ and print the dataset's contents. The dataset contains code/ (Python scripts), data/ (CSV files) and examples/ (images of the different penguin species).
import os
# deactivate DataLad's progressbar for this notebook
os.environ['DATALAD_UI_PROGRESSBAR'] = 'none'
!datalad clone https://gin.g-node.org/obi/penguins
%cd penguins
!ls **
install(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset)
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins
LICENSE.txt  README.md

code:
aggregate_culmen_data.py  plot_culmen_length_vs_depth.py

data:
table_219.csv  table_220.csv  table_221.csv

examples:
adelie.jpg  chinstrap.jpg  gentoo.jpg
Example: Use datalad run to run an echo command that writes the text 'Penguins are cool' to penguins.md.
NOTE: When nesting quotations, we must use different quotation marks, e.g. double quotes for the outer and single quotes for the inner quotation: " '' "
!datalad run "echo 'Penguins are cool'>penguins.md"[INFO ] == Command start (output follows) ===== [INFO ] == Command exit (modification check follows) ===== run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [echo 'Penguins are cool'>penguins.md] add(ok): penguins.md (file) save(ok): . (dataset)
Exercise: Check the last entry in git log to see the message generated by the run command.
Solution
!git log -1
commit 6cdf7534ec56f012ec58993d26cb5ac6d6faac1d (HEAD -> master)
Author: obi <ole.bialas@posteo.de>
Date:   Wed Dec 10 15:04:58 2025 +0100

    [DATALAD RUNCMD] echo 'Penguins are cool'>penguins.md

    === Do not change lines below ===
    {
     "chain": [],
     "cmd": "echo 'Penguins are cool'>penguins.md",
     "dsid": "3a8aacc5-85f0-4114-adee-fcfa7d21a5df",
     "exit": 0,
     "extra_inputs": [],
     "inputs": [],
     "outputs": [],
     "pwd": "."
    }
    ^^^ Do not change lines above ^^^
Exercise: The echo command below appends another line to README.md. Wrap it in datalad run and execute it. Then check the last entry in the git log to see the commit message created by the run command.
!echo 'The Linux mascot is a penguin '>>README.md
Solution
!datalad run "echo 'The Linux mascot is a penguin '>README.md"
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [echo 'The Linux mascot is a penguin '>RE...]
add(ok): README.md (file)
save(ok): . (dataset)

!git log -1
commit 5d9a92d00d124784d22ac5e47dedf5e8cb9d0b1e (HEAD -> master)
Author: obi <ole.bialas@posteo.de>
Date:   Wed Dec 10 15:05:00 2025 +0100

    [DATALAD RUNCMD] echo 'The Linux mascot is a penguin '>RE...

    === Do not change lines below ===
    {
     "chain": [],
     "cmd": "echo 'The Linux mascot is a penguin '>README.md",
     "dsid": "3a8aacc5-85f0-4114-adee-fcfa7d21a5df",
     "exit": 0,
     "extra_inputs": [],
     "inputs": [],
     "outputs": [],
     "pwd": "."
    }
    ^^^ Do not change lines above ^^^
Exercise: Use datalad run to execute an echo command that writes the names of all penguin species in this dataset (adelie, chinstrap, gentoo) to the file species.txt, then show the last entry in the git log.
BONUS: Add a custom commit message
Solution
!datalad run -m "listing species" "echo 'adelie chinstrap gentoo' > species.txt"[INFO ] == Command start (output follows) ===== [INFO ] == Command exit (modification check follows) ===== run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [echo 'adelie chinstrap gentoo' > species...] add(ok): species.txt (file) save(ok): . (dataset)
!git log -1
commit 3cb4ca897b4525445e440bbbf0ed9f29a58f355b (HEAD -> master)
Author: obi <ole.bialas@posteo.de>
Date:   Wed Dec 10 15:05:02 2025 +0100

    [DATALAD RUNCMD] listing species

    === Do not change lines below ===
    {
     "chain": [],
     "cmd": "echo 'adelie chinstrap gentoo' > species.txt",
     "dsid": "3a8aacc5-85f0-4114-adee-fcfa7d21a5df",
     "exit": 0,
     "extra_inputs": [],
     "inputs": [],
     "outputs": [],
     "pwd": "."
    }
    ^^^ Do not change lines above ^^^
Exercise: Try to use datalad run to execute the Python script code/aggregate_culmen_data.py. What error do you observe?
Solution
!datalad run "python code/aggregate_culmen_data.py"FileNotFoundError: [Errno 2] No such file or directory: '/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_219.csv' [INFO ] == Command exit (modification check follows) ===== [INFO ] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git/COMMIT_EDITMSG' run(error): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py]
Section 2: Handling Inputs and Outputs
Background
In the previous exercise, we got an error because the aggregate_culmen_data.py script requires the CSV files in data/ as input, but we haven't downloaded their contents yet. While we could simply run datalad get data and then repeat the command, there is a better way: we can declare the required files as --input and the run command will automatically get their content if needed. Likewise, we can declare files as --output and the run command will automatically unlock them, which allows us to overwrite them.
When specifying inputs and outputs, there is a tradeoff between verbosity and specificity: listing every single file can be very tedious, while declaring a whole directory as input or output risks downloading or overwriting files by accident.
Often, a good compromise is to use all files of a given type.
For example, --input "data/*.csv" means that every file in the data/ folder that ends in .csv will be used as input.
This kind of wildcard pattern is called a glob pattern - globs let you select files in a very concise and powerful way.
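As a rough sketch of how the pieces fit together (the script and output names here are placeholders, not files from the penguins dataset), a run call combining both options could look like this:
!datalad run -m "aggregate tables" --input "data/*.csv" --output "results/summary.csv" "python code/my_analysis.py"
DataLad would then fetch every CSV file in data/ before executing the script and unlock results/summary.csv so the script can overwrite it.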
Exercises
In the following exercises we are going to learn how to use the --input and --output arguments of datalad run. Here are the commands you need to know:
| Command | Description |
|---|---|
| datalad run --input "data.csv" "python script.py" | Run script.py with input "data.csv" |
| datalad run --input "data/" "python script.py" | Run script.py with the whole "data/" folder as input |
| datalad run --input "data/*.csv" "python script.py" | Run script.py with every CSV file in "data/" as input |
| datalad run --output "figure.png" "python script.py" | Run script.py with the output "figure.png" |
| datalad run --input "data.csv" --output "figure.png" "python script.py" | Run script.py with input "data.csv" and output "figure.png" |
Exercise: Repeat the datalad run command from the previous exercise but add all CSV files in data/ as --input.
Solution
!datalad run --input "data/*.csv" "python code/aggregate_culmen_data.py"[INFO ] Making sure inputs are available (this may take some time) get(ok): data/table_219.csv (file) [from origin...] get(ok): data/table_221.csv (file) [from origin...] get(ok): data/table_220.csv (file) [from origin...] [INFO ] == Command start (output follows) ===== Searching for table files in /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins Found files /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_219.csv /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_220.csv /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_221.csv Writing data from 342 penguins to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/penguin_culmens.csv [INFO ] == Command exit (modification check follows) ===== run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py] add(ok): results/penguin_culmens.csv (file) save(ok): . (dataset)
Exercise: Repeat the datalad run command from the previous exercise. What does the error message tell you?
Solution
!datalad run --input "data" "python code/aggregate_culmen_data.py"PermissionError: [Errno 13] Permission denied: '/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/penguin_culmens.csv' [INFO ] == Command exit (modification check follows) ===== [INFO ] The command had a non-zero exit code. If this is expected, you can save the changes with 'datalad save -d . -r -F .git/COMMIT_EDITMSG' run(error): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py]
Exercise: Repeat the run command from the previous exercise but add "results/penguin_culmens.csv" as --output.
Solution
!datalad run --input "data" --output "results/penguin_culmens.csv" "python code/aggregate_culmen_data.py"[INFO ] Making sure inputs are available (this may take some time) unlock(ok): results/penguin_culmens.csv (file) [INFO ] == Command start (output follows) ===== Searching for table files in /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins Found files /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_219.csv /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_220.csv /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_221.csv Writing data from 342 penguins to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/penguin_culmens.csv [INFO ] == Command exit (modification check follows) ===== run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py] add(ok): results/penguin_culmens.csv (file)
Exercise: Use datalad run to execute the Python script code/plot_culmen_length_vs_depth.py with results/penguin_culmens.csv as --input and results/culmen_length_vs_depth.png as --output. Then execute the cell below to display the resulting plot.
Solution
!datalad run --input "results/penguin_culmens.csv" --output "results/culmen_length_vs_depth.png" "python code/plot_culmen_length_vs_depth.py"[INFO ] Making sure inputs are available (this may take some time) [INFO ] == Command start (output follows) ===== Loading data ... Creating plot ... Saving figure to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/culmen_length_vs_depth.png [INFO ] == Command exit (modification check follows) ===== run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/plot_culmen_length_vs_depth....] add(ok): results/culmen_length_vs_depth.png (file) save(ok): . (dataset)
from IPython.display import Image
Image("results/culmen_length_vs_depth.png", width=600)Section 3: From Single Scripts to Analysis Pipelines
Background
Another nice feature of DataLad is the ability to rerun recorded commands. This allows you to quickly rerun an analysis step after making changes to a script without having to retype the whole run command.
You can also rerun all steps --since a certain commit.
So, if your analysis consists of a series of datalad run commands, you can reproduce the entire pipeline with a single command - this is the power we gain from tracking the computational provenance of our digital objects!
To rerun commands, we have to refer to specific entries in the repo's commit history.
There are essentially two ways of doing that: by specifying a position relative to the current HEAD of the repo (e.g. HEAD~2, which refers to the commit two steps before the current HEAD) or by specifying the first few characters of the commit hash (e.g. a54b69f).
Note that commit hashes are unique to each repository - you can't simply copy the hashes shown in this notebook, you have to take them from your own git log.
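As a brief sketch (HEAD~1 and HEAD~3 are only illustrative positions; in practice you would pick a position or hash from your own git log):
!git log --oneline             # find the position or hash of the commit you want
!datalad rerun HEAD~1          # rerun the command recorded one commit before HEAD
!datalad rerun --since HEAD~3  # rerun all run commands recorded after HEAD~3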
Exercises
In the following exercises, we are going to use datalad rerun to repeat commands from the commit history. We are also going to rerun multiple commands at once using the --since flag. Here are the commands you need to know:
| Command | Description |
|---|---|
| datalad rerun a268d8ca22b6 | Rerun the command from the commit whose hash starts with a268d8ca22b6 |
| datalad rerun HEAD~1 | Rerun the command recorded one commit before the current HEAD (the second most recent entry) |
| datalad rerun HEAD~2 | Rerun the command recorded two commits before the current HEAD (the third most recent entry) |
| datalad rerun --since a268d8ca22b6 | Rerun ALL commands recorded since the commit whose hash starts with a268d8ca22b6 |
| git log -2 | View the last two entries in the git log |
| git log --oneline | Get a compact view of the git log |
Exercise: View the last entry of the git log to see the message created by the last run command.
Solution
!git log -1
commit a2028700361d8dbe70a8a368d00cc0096eecd478 (HEAD -> master)
Author: obi <ole.bialas@posteo.de>
Date:   Wed Dec 10 15:05:15 2025 +0100

    [DATALAD RUNCMD] python code/plot_culmen_length_vs_depth....

    === Do not change lines below ===
    {
     "chain": [],
     "cmd": "python code/plot_culmen_length_vs_depth.py",
     "dsid": "3a8aacc5-85f0-4114-adee-fcfa7d21a5df",
     "exit": 0,
     "extra_inputs": [],
     "inputs": [
      "results/penguin_culmens.csv"
     ],
     "outputs": [
      "results/culmen_length_vs_depth.png"
     ],
     "pwd": "."
    }
    ^^^ Do not change lines above ^^^
Exercise: Use datalad rerun to rerun the run command recorded one commit before the current HEAD.
Solution
!datalad rerun HEAD~1
[INFO ] run commit 39ce367; (python code/aggre...)
[INFO ] Making sure inputs are available (this may take some time)
unlock(ok): results/penguin_culmens.csv (file)
[INFO ] == Command start (output follows) =====
Searching for table files in /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins
Found files
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_219.csv
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_220.csv
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_221.csv
Writing data from 342 penguins to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/penguin_culmens.csv
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py]
add(ok): results/penguin_culmens.csv (file)
action summary:
  add (ok: 1)
  get (notneeded: 3)
  run (ok: 1)
  save (notneeded: 1)
  unlock (ok: 1)
Exercise: Check the git log. Did rerunning the command create a new entry?
Solution
!git log --oneline -2
a202870 (HEAD -> master) [DATALAD RUNCMD] python code/plot_culmen_length_vs_depth....
39ce367 [DATALAD RUNCMD] python code/aggregate_culmen_data.py
Exercise: Open code/plot_culmen_length_vs_depth.py and change the dpi in fig.savefig() to 150. Then, save the file and use datalad save to track the change. Now use datalad rerun to rerun the plot command (which, after the save, sits one commit before the current HEAD) and check the git log. Did the rerun create a new commit this time?
Solution
!datalad save
!datalad rerun HEAD~1
!git log --oneline -3
[INFO ] run commit 39ce367; (python code/aggre...)
[INFO ] Making sure inputs are available (this may take some time)
unlock(ok): results/penguin_culmens.csv (file)
[INFO ] == Command start (output follows) =====
Searching for table files in /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins
Found files
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_219.csv
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_220.csv
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_221.csv
Writing data from 342 penguins to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/penguin_culmens.csv
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py]
add(ok): results/penguin_culmens.csv (file)
action summary:
add (ok: 1)
get (notneeded: 3)
run (ok: 1)
save (notneeded: 1)
unlock (ok: 1)
a202870 (HEAD -> master) [DATALAD RUNCMD] python code/plot_culmen_length_vs_depth....
39ce367 [DATALAD RUNCMD] python code/aggregate_culmen_data.py
3cb4ca8 [DATALAD RUNCMD] listing species
Exercise: Find the commit hash of the entry just before the first run command and rerun everything --since that commit (i.e. the full “analysis” pipeline)
Solution
!git log --oneline
a202870 (HEAD -> master) [DATALAD RUNCMD] python code/plot_culmen_length_vs_depth....
39ce367 [DATALAD RUNCMD] python code/aggregate_culmen_data.py
3cb4ca8 [DATALAD RUNCMD] listing species
5d9a92d [DATALAD RUNCMD] echo 'The Linux mascot is a penguin '>RE...
6cdf753 [DATALAD RUNCMD] echo 'Penguins are cool'>penguins.md
0e8aebb (origin/master, origin/HEAD) update URL
e30123b add content
4c2b9dc [DATALAD] new dataset
!datalad rerun --since HEAD~6
[INFO ] run commit 6cdf753; (echo 'Penguins ar...)
unlock(ok): penguins.md (file)
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [echo 'Penguins are cool'>penguins.md]
add(ok): penguins.md (file)
[INFO ] run commit 5d9a92d; (echo 'The Linux m...)
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [echo 'The Linux mascot is a penguin '>RE...]
[INFO ] run commit 3cb4ca8; (listing species)
unlock(ok): species.txt (file)
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [echo 'adelie chinstrap gentoo' > species...]
add(ok): species.txt (file)
[INFO ] run commit 39ce367; (python code/aggre...)
[INFO ] Making sure inputs are available (this may take some time)
unlock(ok): results/penguin_culmens.csv (file)
[INFO ] == Command start (output follows) =====
Searching for table files in /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins
Found files
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_219.csv
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_220.csv
/home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/data/table_221.csv
Writing data from 342 penguins to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/penguin_culmens.csv
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/aggregate_culmen_data.py]
add(ok): results/penguin_culmens.csv (file)
[INFO ] run commit a202870; (python code/plot_...)
[INFO ] Making sure inputs are available (this may take some time)
unlock(ok): results/culmen_length_vs_depth.png (file)
[INFO ] == Command start (output follows) =====
Loading data ...
Creating plot ...
Saving figure to /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins/results/culmen_length_vs_depth.png
[INFO ] == Command exit (modification check follows) =====
run(ok): /home/olebi/projects/new-learning-platform/notebooks/datalad/02_data_sharing_and_provenance_tracking/02_running_commands_while_tracking_provenance/penguins (dataset) [python code/plot_culmen_length_vs_depth....]
add(ok): results/culmen_length_vs_depth.png (file)
action summary:
  add (ok: 4)
  get (notneeded: 4)
  run (ok: 5)
  save (notneeded: 5)
  unlock (notneeded: 1, ok: 4)