BIDS Containers

Author/Maintainer: Dianne Patterson Ph.D. dkp @ arizona.edu
Date Created: 2019_07_30
Date Updated: 2023_03_07
Tags: BIDS, containers
OS: UNIX (e.g., Mac or Linux)

Introduction

This page provides detailed instructions, examples and scripts for using Singularity (a.k.a. apptainer) containers that I provide on the HPC for the neuroimaging community. Some discussion of Docker containers is also provided. As always, check Date Updated above to ensure you are not getting stale information. My preference is to point to official documentation when I can find it and think it is useful. If you have found documentation you believe I should consider linking to, please let me know (dkp @ arizona.edu).

Note

As of Oct 31, 2022, the University of Arizona HPC has switched from Singularity to Apptainer. See Containers.

BIDS Containers are BIG and Resource Intensive

  • The containers we use for neuroimage processing tend to be large and resource intensive. That brings its own set of problems.

  • For Docker on the Mac, you need to check Preferences ➜ Disk to allow Docker plenty of room to store the containers it builds (several containers can easily require 50-100 GB of space). In addition, check Preferences ➜ Advanced to make sure you give Docker plenty of resources to run tools like Freesurfer. For example, I allocate 8 CPUs and 32 GB of memory to Docker.

  • In theory, it is possible to move your Docker storage to an external disk. In my experience this is flaky and results in slowness and lots of crashing (early 2019). Of course, it could be better in the future.

  • The implications of container size for Singularity (or Apptainer) containers are explored here.

  • Below I provide information about neuroimaging Singularity containers I maintain on the U of A HPC. You are free to use these if you have an HPC account. In addition to providing containers, I also provide example scripts for running those containers. You'll have to copy and modify those scripts for your own directories and account name. A detailed walk-through is provided in the BET section below.

  • Several common BIDS containers are available in /contrib/singularity/shared/neuroimaging. The example scripts that call these containers all use an environment variable ${SIF} to reference this path. In general, a generic name is provided for each container and is linked to the fully versioned name, e.g., fmriprep.sif points to fmriprep_v21.0.0.sif. You may wish to use the generic name when first experimenting with running a container; that is the name that appears in the batch scripts I provide as templates. However, once you want to run a container on all your subjects, you should use the fully versioned name for the container. This guarantees consistency, even if other versions of the container are added to the directory. You are free to use these shared containers instead of filling up your own space with duplicates, but you need to define the environment variable SIF in your .bash_profile like this:

    export SIF=/contrib/singularity/shared/neuroimaging
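
  • After adding this line, open a new shell (or source your .bash_profile) so the variable is defined; you can then confirm it and see which containers are available, e.g.:

    source ~/.bash_profile
    echo ${SIF}
    ls ${SIF}/*.sif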
    
  • In this case, singularity run can be called like this from the parent directory of Nifti:

    singularity run --cleanenv --bind Nifti:/data ${SIF}/bet.sif /data /data/derivatives participant --participant_label ${Subject}
    
  • Alternatively, you can write out the whole path for the singularity run call:

    singularity run --cleanenv --bind Nifti:/data /contrib/singularity/shared/neuroimaging/bet.sif /data /data/derivatives participant --participant_label ${Subject}
    

Create Subdirectories under Derivatives for your Outputs

In general, you want to put results of running these containers into a derivatives directory. fMRIPrep creates subdirectories under derivatives, which seems like a good idea as it keeps the outputs of different containers separated and it does not annoy the bids validator. At the time of this writing, 02/27/2020, the standards for naming in the derivatives directory have not been finalized.

Note

It is a good idea to create the derivatives directory before you run your Docker or Singularity container. Sometimes the containerized app looks for a directory and fails if it does not find it.
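
For example, from the top of the example dataset used in the walk-through below (the MRIS directory), creating it might look like this:

mkdir -p derivatives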

BET

This is a small neuroimaging container (under 500 MB), which runs quickly. This walk-through will provide you with experience working at the unix commandline, transferring data, running interactive and batch mode processes on the HPC, building a Singularity container from a Docker container, and running a bids-compliant workflow with Singularity on the HPC. This assumes you have an HPC account and are familiar with the material on the HPC page. You will also need some experience with the Unix command line.

Login to OOD and Get Ready

  • Open a File Explorer window in your home directory (Files ➜ Home Directory).

  • From the File Explorer window, select Open in Terminal (at the top, 2nd from the left) and choose Ocelote.

Try Data Transfer

Download sub-219_bids.zip. The dataset is too big to upload with the OOD upload button, so use scp or Globus instead, e.g. (note that you will need to use your username instead of dkp). You can also use graphical scp/sftp programs like WinSCP or Cyberduck. Data transfers are handled by filexfer.hpc.arizona.edu, which may be called the hostname or server in your transfer program:

scp -v sub-219_bids.zip dkp@filexfer.hpc.arizona.edu:~

Unzip the dataset; the unzipped directory is called MRIS.
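
For example, if the zip file landed in your home directory:

cd ~
unzip sub-219_bids.zip
cd MRIS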

Build the BET Singularity Container

Start an interactive session (use your own account name):

srun --nodes=1 --ntasks=1 --time=01:00:00 --job-name=interact --account=dkp --partition=standard --pty bash -i

Once you have an interactive prompt, you can build the Singularity container. First, load the Singularity module so the HPC will understand the Singularity commands you issue. Second, build the container by pulling it from Docker Hub:

singularity build bet.sif docker://bids/example
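
If the singularity command is not recognized, load the module first. The exact module name can vary; on the UA HPC it is typically something like the following (check module avail if unsure):

module load singularity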

Note

If you have any trouble with this step, you can use /contrib/singularity/shared/neuroimaging/bet.sif instead. You are welcome to copy it, but you can also use it without copying it.

Run BET with Singularity

Warning

You should be in interactive mode.

Run Singularity on the dataset (you must be in the MRIS directory):

singularity run --cleanenv --bind ${PWD}/Nifti:/data:ro --bind ${PWD}/derivatives:/outputs ./bet.sif  /data /outputs participant --participant_label 219

If you do not have the bet.sif container in your home directory, you can use the one in /contrib/singularity/shared/neuroimaging:

singularity run --cleanenv --bind ${PWD}/Nifti:/data:ro --bind ${PWD}/derivatives:/outputs /contrib/singularity/shared/neuroimaging/bet.sif /data /outputs participant --participant_label 219

If Singularity runs properly, it creates sub-219_ses-itbs_brain.nii.gz in the derivatives directory (under MRIS, because that is the directory we bound to /outputs). Confirm that it worked:

ls derivatives

Provided that worked, we can run the group-level BIDS command:

singularity run --cleanenv --bind ${PWD}/Nifti:/data:ro --bind ${PWD}/derivatives:/outputs ${SIF}/bet.sif  /data /outputs group

That should create avg_brain_size.txt in the derivatives directory.
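
You can view the result from the MRIS directory, e.g.:

cat derivatives/avg_brain_size.txt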

Understanding the Singularity Command

Singularity takes a number of options. So far you've seen build and run: build creates the sif file, and run uses that file to perform some processing. The pieces of the run command are explained below and assembled into a complete example at the end of this list.

  • --cleanenv prevents conflicts between libraries outside the container and libraries inside the container; although sometimes the container runs fine without --cleanenv, it is generally a good idea to include it.

  • --bind Singularity (like Docker) has a concept of what is inside the container and what is outside the container. The BIDS standard requires that certain directories exist inside every BIDS container, e.g., /data (or sometimes /bids_dataset) and /outputs. You must bind your preferred directory outside the container to these internal directories. Order is important (outside:inside). Here are our two examples: from the directory above Nifti, --bind Nifti:/data; from inside the Nifti directory, --bind ${PWD}:/data.

  • What container are we running? You must provide the unix path to the container. There are three examples here:

    • ./bet.sif assumes that bet.sif is in the same directory where you are running the singularity command.

    • /contrib/singularity/shared/neuroimaging/bet.sif provides the path to bet.sif in /contrib/singularity/shared/neuroimaging.

    • ../bet.sif says the container is up one directory level from where you are running the singularity command.

  • BIDS requires that we list input and output directories. This is relative to the bind statement that defines the directory on the outside corresponding to /data on the inside. Thus /data/derivatives will correctly find our derivatives directory outside the container. This is the same for Docker containers.

  • Finally, further BIDS options are specified just like they would be for the corresponding Docker runs.

  • If your directory does not meet the BIDS specification, add the flag --skip_bids_validator to the command to relax the stringent requirements for the directory.
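
Putting these pieces together, here is the participant-level call assembled and annotated. It is run from the MRIS directory used in the walk-through above; adjust the paths and subject label for your own data:

# Run from the MRIS directory; ${Subject} is a participant label such as 219.
# --bind maps outside:inside; the input is mounted read-only at /data and
# MRIS/derivatives is mounted at /outputs.
singularity run --cleanenv \
    --bind ${PWD}/Nifti:/data:ro \
    --bind ${PWD}/derivatives:/outputs \
    ${SIF}/bet.sif \
    /data /outputs participant --participant_label ${Subject}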

Running a Batch Job Script for a BIDS Neuroimaging Singularity Container

Batch jobs, like interactive mode, use your allocated time. Copy runbet.sh from /groups/dkp/neuroimaging/scripts:

cp /groups/dkp/neuroimaging/scripts/runbet.sh .

The script consists of two sections. The first section specifies the resources you need to run the script. All the scripts I make available to you have pretty good estimates of the time and resources required.

The second part of the script is a standard bash script. It defines a variable Subject and calls Singularity.
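
As a rough sketch (this is not the actual runbet.sh; the resource values and paths are illustrative assumptions), the two sections look something like this:

#!/bin/bash
# Section 1: resource requests read by slurm (illustrative values)
#SBATCH --job-name=bet
#SBATCH --account=dkp
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --mail-user=dkp@arizona.edu

# Section 2: a standard bash script
Subject=${sub}                      # passed in with: sbatch --export sub=219
export MRIS=/groups/dkp/BIDS
export SIF=/contrib/singularity/shared/neuroimaging
mkdir -p ${MRIS}/derivatives        # created if it does not already exist
singularity run --cleanenv \
    --bind ${MRIS}/Nifti:/data:ro \
    --bind ${MRIS}/derivatives:/outputs \
    ${SIF}/bet.sif /data /outputs participant --participant_label ${Subject}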

Open the script with the editor, because you will need to modify several values:

  • Modify the account, e.g., change --account=dkp to your own account name, e.g., --account=akielar.

  • Modify #SBATCH --mail-user=dkp@arizona.edu to use your email address instead of mine.

  • Change MRIS to point to your directory, instead of /groups/dkp/BIDS, e.g.,:

    export MRIS=/groups/akielar/test
    
  • The singularity run statement looks for bet.sif on the path specified by the environment variable ${SIF}: export SIF=/contrib/singularity/shared/neuroimaging. This should work as written.

  • Save the script.

  • You will pass the subject variable to sbatch using --export sub=219.

  • The derivatives directory will be created if it does not exist.

Run the script (you must specify the path to runbet.sh unless the directory containing it is already in your unix path). In this example, my Scripts directory is already in my path:

sbatch --export sub=219 runbet.sh
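
If you prefer the command line to the OOD Active Jobs window, standard slurm commands show the job while it is queued or running, e.g.:

squeue --user=$USER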

Look in the active jobs window to see if your job is queued. It runs very quickly so it may complete before you have a chance to see it. When it finishes, it creates a text log (e.g., slurm-2374948.out) describing what it did. The job submission system should also send you an email from root telling you the job is complete. See below. Exit status=0 means the job completed correctly. The job used 11 seconds of walltime and 5 seconds of cpu time:

 1: This file is not part of the BIDS specification, make sure it isn't included in the dataset by accident. Data derivatives (processed data) should be placed in /derivatives folder. (code: 1 - NOT_INCLUDED)
          /.bidsignore
                  Evidence: .bidsignore
          /sub-219/ses-itbs/anat/sub-219_ses-itbs_acq-tse_T2w1.json
                  Evidence: sub-219_ses-itbs_acq-tse_T2w1.json
          /sub-219/ses-itbs/anat/sub-219_ses-itbs_acq-tse_T2w1.nii.gz
                  Evidence: sub-219_ses-itbs_acq-tse_T2w1.nii.gz
          /sub-219/ses-itbs/anat/sub-219_ses-itbs_acq-tse_T2w2.json
                  Evidence: sub-219_ses-itbs_acq-tse_T2w2.json
          /sub-219/ses-itbs/anat/sub-219_ses-itbs_acq-tse_T2w2.nii.gz
                  Evidence: sub-219_ses-itbs_acq-tse_T2w2.nii.gz
          /sub-219/ses-itbs/func/sub-219_ses-itbs_acq-asl_run-01.json
                  Evidence: sub-219_ses-itbs_acq-asl_run-01.json
          /sub-219/ses-itbs/func/sub-219_ses-itbs_acq-asl_run-01.nii.gz
                  Evidence: sub-219_ses-itbs_acq-asl_run-01.nii.gz
          /sub-219/ses-itbs/func/sub-219_ses-itbs_acq-asl_run-02.json
                  Evidence: sub-219_ses-itbs_acq-asl_run-02.json
          /sub-219/ses-itbs/func/sub-219_ses-itbs_acq-asl_run-02.nii.gz
                  Evidence: sub-219_ses-itbs_acq-asl_run-02.nii.gz

  Summary:                  Available Tasks:                     Available Modalities:
  36 Files, 120.19MB        rest                                 T1w
  1 - Subject               TODO: full task name for rest        dwi
  1 - Session                                                    bold
                                                                 fieldmap
                                                                 fieldmap


bet /data/sub-219/ses-itbs/anat/sub-219_ses-itbs_T1w.nii.gz /outputs/sub-219_ses-itbs_brain.nii.gz

Detailed performance metrics for this job will be available at https://metrics.hpc.arizona.edu/#job_viewer?action=show&realm=SUPREMM&resource_id=73&local_job_id=2374948 by 8am on 2021/10/24.

Look at the contents of the Nifti/derivatives directory.

Other Neuroimaging Batch Scripts

Other scripts to run Singularity neuroimaging containers are available on bitbucket and in /groups/dkp/neuroimaging/scripts. Read more about slurm.

BIP

BIP (Bidirectional Iterative Parcellation) runs FSL dwi processing with BedpostX and Probtrackx2. The twist is that BIP runs Probtrackx2 iteratively until the size of the connected grey matter regions stabilizes. This provides a unique and useful characterization of connectivity (the locations and volumes of the connected grey matter regions) not available with other solutions.

Patterson, D. K., Van Petten, C., Beeson, P., Rapcsak, S. Z., & Plante, E. (2014). Bidirectional iterative parcellation of diffusion weighted imaging data: separating cortical regions connected by the arcuate fasciculus and extreme capsule. NeuroImage, 102 Pt 2, 704–716. http://doi.org/10.1016/j.neuroimage.2014.08.032

A detailed description of how to run BIP is available as a Readme on the bipbids Bitbucket site. BIP runs in three stages: setup, prep and bip. setup prepares the T1w and dwi images. prep runs eddy, dtifit and bedpostX. bip does the iterating for the selected tracts.

Docker

Any of these steps can be run with a local Docker container: diannepat/bip2. Run docker pull diannepat/bip2 to get the container and download the helpful bash script bip_wrap.sh.

Singularity

To take advantage of GPU processing for the prep and bip steps, you should run the Singularity container.

You can build the Singularity container from the Singularity_bip recipe. See Build Singularity Containers from Recipes.

For more about the GPU code, see Bedpostx_GPU (BedpostX_gpu runs in 5 minutes instead of 12-24 hours for BedpostX) and Probtrackx_GPU (200x faster). The result of running Probtrackx_GPU is slightly different than running probtrackx so don’t mix results from the regular and GPU versions.

A Singularity container is available on the HPC: /contrib/singularity/shared/neuroimaging/bip2.sif.

fMRIPrep

There is plenty of online documentation for fMRIPrep.

Note

You need to download a Freesurfer license and make sure your container knows where it is. See The Freesurfer License

Note

Be careful to name sessions with dashes and not underscores. That is, itbs-pre will work fine, but itbs_pre will cause you pain in later processing.

Warning

As of 5/14/2020, fMRIPrep has trouble running in parallel on the HPC.

You can rerun fMRIPrep and it’ll find its old output and avoid repeating steps, especially if you have created the -w work directory (the runfmriprep.sh script does this). So, it is not the end of the world if you underestimate the time needed to run.

If you run freesurfer, you can use the same freesurfer output directory with qsiprep (described below) and vice-versa. So, you don’t have to replicate this large directory if you are using both fmriprep and qsiprep.

If you create output surface formats (e.g., gifti), it is helpful to include fsnative as an output type: the fsnative fMRI files created can be overlaid on the T1w gifti files that are generated.

Warning

By default, fMRIPrep will not use your fieldmaps to do susceptibility correction UNLESS the phasediff JSON file contains an IntendedFor field pointing to each of the fMRI runs. To create this IntendedFor field after-the-fact, see this helpful description and Docker container: intend4. A Google Cloud Shell tutorial for working with intend4 is also available: Google cloud shell tutorial: intend4.
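
For reference, IntendedFor in the phasediff JSON is simply a list of paths relative to the subject directory. If you only have a file or two, you could also add the field by hand or with a one-liner such as the following (the filenames here are illustrative, not from the example dataset):

# Add an IntendedFor field to a phasediff sidecar (illustrative filenames)
jq '. + {"IntendedFor": ["ses-itbs/func/sub-219_ses-itbs_task-rest_bold.nii.gz"]}' \
    sub-219_ses-itbs_phasediff.json > tmp.json && mv tmp.json sub-219_ses-itbs_phasediff.json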

Docker

Determine what version of fMRIPrep you have as a Docker container (on your local machine):

docker run --rm  -it nipreps/fmriprep --version

An example run of the fMRIPrep Docker container might look like this:

docker run --rm -it -v /Users/dpat/license.txt:/opt/freesurfer/license.txt:ro -v ${PWD}:/data:ro -v ${PWD}/derivatives:/out nipreps/fmriprep:latest /data /out participant --participant_label 219 -w /out/work --cifti-output

Singularity

Determine which version of fMRIPrep is in the Singularity container:

singularity run --cleanenv /contrib/singularity/shared/neuroimaging/fmriprep.sif --version

A usable Singularity job script is available here: runfmriprep.sh. This should be easy to modify for your own purposes.

  • fMRIPrep needs to know where your freesurfer license is. Ensure the singularity command points to your license and that it is outside of your BIDS directory:

    --fs-license-file /groups/dkp/neuroimaging/license.txt
    
  • Create a work directory for fMRIPrep. This is where it will store intermediate steps it can use if it has to start over. Like the freesurfer license, this directory should not be in your BIDS directory:

    -w ${MRIS}/fmriprep_work
    
  • BIDS singularity images are big and take some effort to build (Gory details are in Building Large Containers if you want to do it yourself).

  • Currently, the singularity command in this script points to the fMRIPrep singularity container in /contrib/singularity/shared/neuroimaging (a rough sketch of the assembled call appears after this list):

    /contrib/singularity/shared/neuroimaging/fmriprep.sif
    
  • Permissions should allow you to use containers in this directory freely. If my version of the containers is satisfactory for you, then you do not need to replicate them in your own directory. I am hoping we’ll have a shared community place for these images at some point (other than my directory).

  • You do not NEED to change any other arguments. --stop-on-first-crash is a good idea. You may wish to test with the reduced version that does not run recon-all. It is currently commented out, but it'll succeed or fail faster than the default call.

  • Once you are satisfied with the changes you've made to your script, run your copy of the script like this:

    sbatch --export sub=1012 runfmriprep.sh
    
  • When the job finishes (or crashes), it’ll leave behind a text log, e.g., slurm-2339914.out. You can view a log of the job in the Jobs dropdown on the OOD dashboard.

  • Read this log with Edit in OOD or cat at the command line. The bottom of the log may suggest how many resources you should have allocated to the job, which can tell you whether you have vastly over- or under-estimated. In addition, it provides a log of what the container did, which may help you debug.

  • See BIDS containers for more detail about individual containers.
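
Putting the pieces above together, the core singularity call inside such a script might look roughly like this. This is an illustrative sketch, not the exact runfmriprep.sh; the BIDS directory layout follows the MRIS/Nifti example used earlier, and you may need extra --bind options if your license or work directories are not already visible inside the container:

# Illustrative only; ${MRIS} and ${sub} are set earlier in the script.
singularity run --cleanenv \
    --bind ${MRIS}/Nifti:/data \
    --bind ${MRIS}/derivatives:/out \
    ${SIF}/fmriprep.sif \
    /data /out participant --participant_label ${sub} \
    --fs-license-file /groups/dkp/neuroimaging/license.txt \
    -w ${MRIS}/fmriprep_work \
    --stop-on-first-crash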

MRIQC and QMTOOLS

MRIQC is from the Poldracklab, just like fmriprep. MRIQC runs quickly and produces nice reports that can alert you to data quality problems (mostly movement, but a few other issues).

QMTOOLS provides several programs to visualize, compare, and review the image quality metrics (IQMs) produced by the MRIQC program. Visit qmtools support to get started. A Google cloudshell lesson for qmtools is available here. qmtools-latest.sif is available on the HPC, and scripts for using apptainer on the HPC are available in qmtools-support.

Docker

The mriqc Docker container is available on dockerhub:

docker pull nipreps/mriqc

A wrapper script for the Docker container is also available, mriqc_sib_wrap.sh.

Singularity

MRIQC is available on the HPC: /contrib/singularity/shared/neuroimaging/mriqc.sif, along with two scripts that facilitate running it at the participant level (/groups/dkp/neuroimaging/scripts/runmriqc.sh) and at the group level (/groups/dkp/neuroimaging/scripts/runmriqc_group.sh). Here is an example of using the participant-level script on the HPC. We pass in the subject number:

sbatch --export sub=1012 runmriqc.sh

A participant-level mriqc run with one T1w anatomical image and one fMRI file took about 25 minutes of walltime with 2 cpus. A group run with a single subject took just under 4 minutes with 2 cpus.

The HTML reports output by MRIQC can be viewed on OOD by selecting View.

Job Times

MRIQC job times vary by the number of files to be processed. Examples on the HPC are 22 minutes for a T1w image only; 1+ hours for a T1w image and 4 fMRI images.

MRtrix3_connectome

MRtrix3_connectome facilitates running the MRtrix software, which processes DWI images to create a connectome. To determine which version of MRtrix3_connectome you have, you can run the following command on Docker:

docker run --rm bids/mrtrix3_connectome -v

Or, on the HPC, you can run the equivalent Singularity command:

singularity run /contrib/singularity/shared/neuroimaging/mrtrix3_connectome.sif -v

Singularity

  • Here’s a script for running MRtrix3_connectome preprocessing on the HPC: runmrtrix3_hcp.sh

QSIprep

QSIprep processes DWI data in an analysis-agnostic way. It is based on the same nipreps principles as fMRIPrep and MRIQC. There are scripts for running preprocessing and reconstruction. qsiprep.sif is available in /contrib/singularity/shared/neuroimaging.

As noted above: If you run freesurfer, you can use the same freesurfer output directory with fmriprep. So, you don’t have to replicate this large directory if you are using both fmriprep and qsiprep.