Coffee Chat Brewing AI Knowledge


[MRIQC 4] MRIQC Report and Image Quality Metrics (IQMs)

MRIQC Results

ex1 ex2 ex3

Using MRIQC to analyze magnetic resonance imaging (MRI) images yields a report in HTML format. The report is divided into two main sections:

  1. Basic visual report: a view of the background of the anatomical image and a zoomed-in mosaic view of the brain
  2. About: errors, and reproducibility and provenance information


View of the background of the anatomical image

The extent of artifacts in the background surrounding the brain region on MRI scans is visualized. Here, the background outside the brain is referred to as air. Typically, there is no signal in the air surrounding the head, so any signal detected in this air mask can be considered noise or an unusual pattern, known as an artifact, generated during the imaging process. Let’s compare the MRIQC report of a well-acquired T1-weighted image (T1WI) with that of a T1WI with artificially added noise. The noise was introduced using the torchio library to create a ghosting effect.
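For reference, a ghosting-like artifact can be synthesized by modulating periodic lines of k-space, which is essentially the mechanism behind torchio’s ghosting transform. The NumPy sketch below is a standalone illustration of that idea, not torchio’s actual implementation:

```python
import numpy as np

def add_ghosting(image, num_ghosts=4, intensity=0.5):
    """Attenuate every num_ghosts-th line of k-space along the
    phase-encoding axis, which produces shifted replicas ("ghosts")
    of the object in image space."""
    k = np.fft.fftshift(np.fft.fft2(image))
    k[::num_ghosts, :] *= (1.0 - intensity)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

# A toy "head": a bright disc on a perfectly dark background
y, x = np.mgrid[:128, :128]
phantom = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)
ghosted = add_ghosting(phantom)

# Ghost replicas leak signal into the previously empty background,
# which is exactly what the background view of the report exposes
print(phantom[phantom == 0].mean(), round(ghosted[phantom == 0].mean(), 3))
```
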

mosaic_bg_normal1 mosaic_bg_normal2

mosaic_bg_abnormal1 mosaic_bg_abnormal2

The top result is from the well-acquired image (1), and the bottom is from the noise-added image (2). Signal intensity within the slices is mapped to brightness on an inverted scale: the stronger the signal, the darker the color. In the first image, the head mask is generally dark and the air mask is bright, making a clear distinction. In contrast, the second image shows less difference in brightness between the head and air masks, and some head regions appear weaker than the air. A closer look reveals wave-like patterns, indicating the artificially induced ghosting effect. This background artifact check lets us qualitatively assess whether the brain region was captured without noise spilling in from outside the head.


Zoomed-in mosaic view of brain

The MRI slices are arranged in order and displayed in a mosaic view. To examine the brain area in detail, the background is mostly excluded, and the images are zoomed in to fit the size of the head mask. Using the mosaic view, we can assess the quality by checking for head motion during the MRI scan, uniformity of image intensity (intensity inhomogeneities), and the presence of global or local noise. Let’s compare the MRIQC report results of the two images used earlier.

mosaic_bg_normal1 mosaic_bg_normal2

mosaic_bg_abnormal1 mosaic_bg_abnormal2

The top result is from image 1, and the bottom is from image 2. Overall, image 1 appears sharper based on the image quality and the distinction between different structures. Regarding head motion, neither image shows significant related issues when reviewing all slices in the mosaic view. However, in image 2, the artificially added ghosting noise is observed within the slices. Wave patterns within the head mask degrade the image quality. By directly examining the images through the mosaic view, we can identify and assess such issues.


Reproducibility and provenance information

To ensure the reproducibility and transparency of the MRIQC report results, provenance information related to quality checks is provided.

Provenance Information

Provenance and reproducibility metadata are provided. This includes information such as the analysis environment (Execution environment), the path of the data used (Input filename), the versions of the packages used (Versions), and the MD5 checksum for file integrity verification (MD5sum).

prov_info

  • Execution environment: The analysis environment. Here, it means that the execution was done in a ‘singularity’ container environment.
  • Input filename: The path of the data used.
  • Versions: The versions of the packages used, such as MRIQC, NiPype, and TemplateFlow.
  • MD5sum: The MD5 checksum for verifying the integrity of the input file.
  • Warnings: ‘large_rot_frame’ indicates whether there were large rotation frames in the image, and ‘small_air_mask’ indicates whether there were small air masks. Both factors can affect the accuracy of image analysis.

Dataset Information

Metadata related to the data used in the analysis is provided.

data_info

  • AcquisitionMatrixPE: The size of the acquisition matrix in the phase-encoding (PE) direction. In this example, it is 256 x 256.
  • AcquisitionTime: The time the image scan was performed.
  • ConversionSoftware: The software used to convert DICOM to NIfTI. Here, ‘dcm2niix’ was used.
  • ConversionSoftwareVersion: The version of the above conversion software.
  • HeudiconvVersion: The version of Heudiconv used to convert files to BIDS format.
  • ImageOrientationPatientDICOM: Vector information related to the orientation of the patient’s body.
  • ImageType: The type of image, which here means it is a ‘derivative’ image.
  • InstitutionName: The name of the institution where the data originated.
  • Modality: The imaging method. Here, ‘Magnetic Resonance (MR)’ imaging was used.
  • ProtocolName: The name of the protocol used.
  • RawImage: Indicates whether it is a raw image or not.
  • ReconMatrixPE: The size of the reconstructed matrix in the phase-encoding direction. Here, it is 256 x 256.
  • ScanningSequence: The scanning sequence used.
  • SeriesNumber: The series number, used to identify the series to which the dataset belongs.
  • SliceThickness: The thickness of the slices.
  • SpacingBetweenSlices: The spacing between adjacent slices.

Image Quality Metrics

Various Image Quality Metrics (IQMs) scores are reported to quantitatively evaluate the image quality. The metric items vary depending on the image modality.

  • IQMs for structural images: Such as T1WI, T2WI, etc.
  • IQMs for functional images: Such as fMRI-related images, etc.
  • IQMs for diffusion images: Such as DWI, etc.

IQM score results can also be found in the JSON files generated in the MRIQC output directory for each image.
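Since the IQMs are plain JSON, they are easy to read programmatically. A minimal sketch (the path and key names below are from my own runs and are assumptions — check the keys in your own output files):

```python
import json

def load_iqms(path):
    """Read the IQM dictionary that MRIQC writes next to each HTML report."""
    with open(path) as f:
        return json.load(f)

# Hypothetical path -- substitute the JSON file from your own
# MRIQC output directory, e.g.:
# iqms = load_iqms("output/sub-001/ses-001/anat/sub-001_ses-001_T1w.json")
# print({k: iqms[k] for k in ("cjv", "cnr", "efc", "fber") if k in iqms})
```
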

iqm


IQMs for Structural Images

In this example, let’s explore IQMs for structural images, considering the use of T1-weighted imaging (T1WI).

Measures based on noise measurements

  • cjv Coefficient of joint variation (CJV)
    • A measure of how much two or more intensity distributions vary relative to the separation between their means: the larger the spread of each distribution compared to the distance between them, the more they overlap.
    • MRIQC calculates the CJV between the gray matter (GM) and white matter (WM) of the brain, as the ratio of the summed standard deviations to the absolute difference of the means:
    \[CJV={\sigma_{GM}+\sigma_{WM}\over|\mu_{GM}-\mu_{WM}|}\]
    • The CJV of GM and WM serves as the objective function for optimizing the Intensity Non-Uniformity (INU) correction algorithm, as proposed by Ganzetti et al.
      • INU refers to the unevenness in brightness observed across different regions in MRI, often caused by non-uniformity in the magnetic field, especially by variations in radiofrequency (RF) transmission intensity.
      • INU can degrade image accuracy and make interpretation difficult, so correcting INU is advisable for improving MRI quality.
    • A higher CJV implies stronger head motion or larger INU artifacts, indicating poorer image quality; lower CJV values are therefore indicative of better image quality.
  • snr Signal-to-noise ratio (SNR)
    • A measure of the relationship between the strength of the measured signal and the level of surrounding noise, indicating the quality and accuracy of the measured signal. Signal represents the signal observed in the tissue of interest, while noise refers to signals arising from patient motion or electronic interference, among others. SNR is used to distinguish between the two.
    • A higher SNR indicates that the signal of interest is larger compared to the noise, signifying better data quality.
    \[SNR={Signal \ Strength\over Standard \ Deviation \ of \ Noise}\]
  • snrd Dietrich’s SNR (SNRd)
    • Calculates SNR with reference to the surrounding air background in MRI, serving as a vital metric for assessing MRI quality. Proposed by Dietrich et al.
    • Since air typically exhibits uniform signal, referencing it allows for a more precise differentiation between signal and noise, thereby enhancing diagnostic accuracy.
    \[SNRd={Signal \ Strength\over Standard \ Deviation \ of \ Air \ Background}\]
  • cnr Contrast-to-noise ratio (CNR)
    • Extends the concept of SNR, representing the relationship between contrast and noise levels in an image. Contrast refers to the brightness difference between structures or objects in an image, while noise refers to irregular or random signals.
    • A higher CNR indicates lower noise when achieving the desired image contrast, signifying a clearer representation of objects or structures with minimal noise. This facilitates easier interpretation and improves image quality.
    • MRIQC employs CNR to evaluate how well GM and WM are delineated and how easily the image can be interpreted.
    \[CNR={|\mu_{GM}-\mu_{WM}|\over \sqrt{\sigma^2_{GM}+\sigma^2_{WM}}}\]
  • qi_2 Mortamet’s Quality index 2 (QI2)
    • Evaluates the appropriateness of data distribution within the air mask after the removal of artificial intensities. The suitability of data distribution within the air mask region can affect the reliability of image processing and interpretation.
    • Lower values indicate better quality.
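The noise-based measures above can be sketched in a few lines of NumPy. These are simplified versions of the formulas as written here (MRIQC’s implementations include additional normalization terms), and the tissue means and standard deviations are arbitrary illustration numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic intensity samples standing in for segmented voxel values
gm = rng.normal(500, 40, 10_000)    # gray matter
wm = rng.normal(700, 35, 10_000)    # white matter
air = rng.normal(0, 5, 10_000)      # background air

def cjv(gm, wm):
    # Lower is better: spread of each tissue relative to their separation
    return (gm.std() + wm.std()) / abs(gm.mean() - wm.mean())

def snr(tissue):
    # Mean signal over its own standard deviation
    return tissue.mean() / tissue.std()

def snrd(tissue, air):
    # Simplified Dietrich-style SNR: noise estimated from the air background
    return tissue.mean() / air.std()

def cnr(gm, wm):
    # Contrast between GM and WM relative to their combined noise
    return abs(gm.mean() - wm.mean()) / np.hypot(gm.std(), wm.std())

print(round(cjv(gm, wm), 3), round(cnr(gm, wm), 2))
```
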

Measures based on information theory

  • efc Entropy-focus criterion (EFC)
    • Uses the Shannon entropy of voxel intensities to measure ghosting and blurring caused by head movements. Proposed by Atkinson et al.
    • As ghosting and blurring increase, voxels lose information, causing the Shannon entropy of the voxels to increase. Thus, EFC has higher values with more ghosting and blurring, meaning that lower values indicate better image quality.
    • The formula is normalized by the maximum entropy, allowing comparison across images of different dimensions. $p_i$ represents the probability of each voxel’s intensity, and $N$ represents the number of voxels.
    \[EFC={-\sum^N_{i=1} p_i\log_2(p_i) \over \log_2(N)}\]
  • fber Foreground-background energy ratio (FBER)
    • Compares the mean energy of brain tissue within the image to the mean air value outside the brain, measuring how much brain tissue is included in the image to assess image quality. Proposed by Shehzad et al.
    • It is one of the Quality Assurance Protocol (QAP) metrics.
    \[FBER={Mean \ energy \ of \ image \ values \ within \ the \ head \over Mean \ energy \ of \ image \ values \ outside \ the \ head}\]
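Both information-theory measures can be sketched directly from the formulas above (a simplification; MRIQC’s actual implementations differ in detail, and the toy image and masks below are made up):

```python
import numpy as np

def efc(image, eps=1e-12):
    """Entropy-focus criterion following the formula above: Shannon
    entropy of the normalized voxel intensities over the maximum
    entropy log2(N). More ghosting/blurring spreads intensity out,
    raising the entropy, so lower is better."""
    x = np.abs(image).ravel()
    p = x / (x.sum() + eps)
    entropy = -np.sum(p * np.log2(p + eps))
    return entropy / np.log2(p.size)

def fber(image, head_mask):
    """Foreground-background energy ratio: mean squared intensity
    inside the head mask over mean squared intensity outside it.
    Higher is better."""
    foreground = np.mean(image[head_mask] ** 2)
    background = np.mean(image[~head_mask] ** 2)
    return foreground / background

rng = np.random.default_rng(0)
# Toy image: a bright square "head" plus faint background noise
head = np.zeros((64, 64), dtype=bool)
head[16:48, 16:48] = True
image = head.astype(float) + rng.normal(0, 0.05, head.shape)
print(round(efc(image), 3), round(fber(image, head), 1))
```
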

Measures targeting specific artifacts

  • inu : Summary statistics of the INU bias field extracted by N4ITK (max, min, median)
    • The N4ITK algorithm is an advanced technique that improves MRI image quality by correcting RF field inhomogeneity.
    • The INU field, or bias field, refers to the field filtered through N4ITK. The quality of an image can be assessed through the statistics of the INU field. Values closer to 0 indicate greater RF field inhomogeneity, while values closer to 1 indicate better correction and higher quality images.
  • qi_1 Mortamet’s Quality index 1 (QI1)
    • An index used to detect artificial intensities on air masks. It is used to properly analyze air masks by removing artificial intensities.
    • It is generally considered an important metric in preprocessing stages of image data, such as MRI, to enhance image quality.
  • wm2max White-matter to maximum intensity ratio
    • The ratio of the median intensity within the WM to the 95th percentile of the overall intensity distribution. This measures the proportion of significant intensities within the WM region.
    • This ratio can reveal when the tail of the intensity distribution is extended, which often occurs due to the intensities from arterial blood vessels or fatty tissue.
    • If the ratio falls outside the range of roughly 0.6 to 0.8, the WM region of the image is considered non-uniform, indicating lower quality.
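The wm2max ratio is simple enough to sketch directly. In the toy example below, the WM block, background tissue, and the strip of very bright “fat” voxels are all synthetic assumptions chosen to stretch the intensity tail:

```python
import numpy as np

def wm2max(image, wm_mask):
    """Median WM intensity over the 95th percentile of the full
    intensity distribution; roughly 0.6-0.8 is considered healthy."""
    return np.median(image[wm_mask]) / np.percentile(image, 95)

rng = np.random.default_rng(0)
# Synthetic T1w-like slice: mid-gray tissue, a brighter WM block,
# and a strip of very bright "fat" voxels stretching the tail
image = rng.normal(450, 60, (64, 64))
wm_mask = np.zeros(image.shape, dtype=bool)
wm_mask[20:44, 20:44] = True
image[wm_mask] = rng.normal(700, 30, wm_mask.sum())
image[:4, :] = rng.normal(1100, 50, (4, 64))
print(round(wm2max(image, wm_mask), 2))
```

Without the bright strip, the 95th percentile would fall inside the WM distribution itself and the ratio would sit close to 1; the extended tail is what pulls it down into (or below) the expected range.
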

Other measures

  • fwhm Full width at half maximum (FWHM)
    • Represents the full width at half maximum of the intensity values’ spatial distribution in an image, used to measure the image’s resolution and sharpness.
    • Determined by the full width value at half the maximum point of the spatial distribution.
    • Lower FWHM values indicate sharper, higher-resolution images.
    • In MRIQC, FWHM is calculated using the Gaussian width estimator filter implemented in AFNI’s 3dFWHMx.
  • icvs_* Intracranial volume scaling (ICVS)
    • Intracranial volume (ICV) refers to the total volume within the cranium, including brain tissue and cerebrospinal fluid. ICVS represents the relative proportion of a specific tissue within the ICV on MRI.
    • In MRIQC, the volume_fraction() function is used to calculate the ICVS for cerebrospinal fluid (CSF), GM, and WM.
    • The state of the brain can be assessed by determining whether each ICVS fluctuates within the normal range and whether they maintain ideal ratios to one another.
  • summary_*_*
    • MRIQC’s summary_stats() function provides various statistics related to the pixel distribution in the background, CSF, GM, and WM regions of an MRI. These statistics can be used to evaluate image quality.
    • Includes mean, median, median absolute deviation (MAD), standard deviation, kurtosis, 5th percentile, 95th percentile, and number of voxels.
  • tpm Tissue probability map (TPM)
    • Refers to the probability distribution of brain tissue types (e.g., GM, WM). In MRIQC, it measures the overlap between the estimated TPM from the image and the map of the ICBM nonlinear-asymmetric 2009c template.
    • ICBM nonlinear-asymmetric 2009c template: One of the standard brain maps provided by the International Consortium for Brain Mapping (ICBM).

      A number of unbiased non-linear averages of the MNI152 database have been generated that combines the attractions of both high-spatial resolution and signal-to-noise while not being subject to the vagaries of any single brain (Fonov et al., 2011). … We present an unbiased standard magnetic resonance imaging template brain volume for normal population. These volumes were created using data from ICBM project.

      6 different templates are available: …

      ICBM 2009c Nonlinear Asymmetric template – 1×1×1 mm template which includes T1w, T2w, PDw modalities, and tissue probability maps. Intensity inhomogeneity correction was performed using N3 version 1.11. Also included are a brain mask, eye mask, and face mask. Sampling is different from the 2009a template. … [Reference]
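The volume-fraction and summary-statistics measures above can be sketched in a few lines. This is a simplification of the idea behind MRIQC’s volume_fraction() and summary_stats(), and the voxel counts below are made-up illustration numbers:

```python
import numpy as np

def icvs(csf_n, gm_n, wm_n):
    """Fraction of the intracranial volume occupied by each tissue
    class, given per-tissue voxel counts (the idea behind icvs_*)."""
    total = csf_n + gm_n + wm_n
    return {"csf": csf_n / total, "gm": gm_n / total, "wm": wm_n / total}

def summary_stats(values):
    """A few of the per-region statistics reported as summary_*_*."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    return {
        "mean": values.mean(),
        "median": med,
        "mad": np.median(np.abs(values - med)),   # median absolute deviation
        "stdv": values.std(),
        "p05": np.percentile(values, 5),
        "p95": np.percentile(values, 95),
        "n": values.size,
    }

fractions = icvs(csf_n=150_000, gm_n=600_000, wm_n=500_000)
print(fractions)  # the three fractions sum to 1
```
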




[MRIQC 3-1] Opening an HTML file using Flask

MRIQC analyzes and evaluates the quality of MRI images and outputs a report as an HTML file. To view the HTML file, I used Flask. Here is a summary of the method I used.

Flask

Flask is a micro web framework written in Python. With its lightweight and flexible structure, it helps you quickly develop simple web applications and API servers. Since it includes only basic features, it is highly extensible, allowing you to add various plugins and extension modules as needed. It also has an easy-to-learn and intuitive code structure, making it suitable for beginners. However, because it comes with minimal features, you need to use external libraries to add complex functionalities, and maintaining the project can become challenging as the project size grows.

Opening an HTML file using Flask

Installing Flask

You can install it via PyPI:

pip install Flask

‘static/’ and ‘templates/’

Flask requires two folders, static/ and templates/. static/ stores static files such as images, CSS, and JavaScript that exist in or are applied to HTML files. templates/ stores the HTML files to be rendered.

Let me explain the process using an MRIQC report as an example. After creating these two folders in your project folder, save the static files and the HTML file you want to open in their respective folders.

The file paths referenced inside the HTML then need to be updated to match the new layout. If you open the MRIQC report HTML file, you’ll find that image files are specified with relative paths; since those images have been moved into the static/ directory, the paths must be changed to point there.

Writing execution code

And then, write the code to render the HTML file. Here is the code I used in main.py:

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def test():
    return render_template("sub-001_ses-001_T1w.html")

if __name__ == "__main__":
    app.run("0.0.0.0", port=5001)
  • app = Flask(__name__): Creates an instance of a Flask application. __name__ refers to the name of the current module and is used by Flask to locate resources for the application.
  • @app.route("/"): A decorator that instructs Flask to call the test function for the root URL (/).
  • test(): The function that will be executed when the root URL is requested.
  • return render_template("HTML_FILE_NAME.html"): Renders and returns the HTML file located in the templates/ directory.
  • app.run("0.0.0.0", port=5001): Runs the application on address 0.0.0.0 and port 5001.
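As a quick sanity check before wiring in the real report template, Flask’s built-in test client can exercise a route without starting a server. A minimal sketch, where the plain-string route is a stand-in for the render_template call in main.py:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Stand-in response; in main.py this would be render_template(...)
    return "report placeholder"

# The test client issues requests in-process, so no port binding is needed
with app.test_client() as client:
    response = client.get("/")

print(response.status_code)
```
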

Result

When you visit the specified address, you will see that the HTML file is displayed correctly.




[MRIQC 3] Running MRIQC: A Step-by-Step Guide using nii2dcm, Heudiconv, and MRIQC

MRIQC analyzes and evaluates the quality of the input MRI images and compiles the relevant information into a report. To use MRIQC, you need MRI images stored in the BIDS format. In this post, I will detail the process of running MRIQC and obtaining analysis results using DICOM files.


nii2dcm

While I used DICOM files here, NIfTI is also a common MRI file format. If you are using NIfTI files, you can use a BIDS converter that supports NIfTI or convert the NIfTI files to DICOM and then use a DICOM-supported BIDS converter. Based on my personal experience, BIDS converters that support NIfTI did not work reliably (though this might have been due to my own mistakes). You can use the nii2dcm library to convert NIfTI files to DICOM. Refer to the code below:

nii2dcm NIFTI_FILE_DIR OUTPUT_DIR -d MR
  • NIFTI_FILE_DIR: Path to the NIfTI file you want to convert
  • OUTPUT_DIR: Path where the converted DICOM files will be saved


Heudiconv

I used Heudiconv as the BIDS converter. I summarized the instructions by referring to the tutorial provided on the official page. Here’s how to use it:

Installing Heudiconv

Install via PyPI:

pip install heudiconv

Adjusting heuristic.py

Write code that defines the rules for saving each image in the BIDS format. You can refer to or modify the heuristic.py file from the data repository provided in the tutorial. This file determines the modality of the input image files, creates file paths that conform to the BIDS format for each modality, and saves the images accordingly. Modify the judgment criteria and save paths as necessary.

The function to refer to and modify is infotodict() in heuristic.py.

  • Identify the modality of the images to be used: T1WI, T2WI, DWI, etc.
  • Delete or comment out the code related to unused modalities.
  • Check the path format where the modality images will be saved and modify it if needed.
  • Specify and modify the criteria (dimensions, current filename characteristics, etc.) in the conditional statements to distinguish each modality.

The modified example code is as follows. For T1WI and DWI, the path where the images will be saved and the conditions to determine the image modality have been set.
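A hedged sketch of what such a modified infotodict() can look like. The create_key helper follows the shape used in the heudiconv tutorial, and the protocol_name matching strings are assumptions — inspect your own series descriptions (e.g. in heudiconv’s dicominfo.tsv) and adjust:

```python
def create_key(template, outtype=("nii.gz",), annotation_classes=None):
    # Helper from the heudiconv tutorial: a key is the BIDS path
    # template plus the output types
    if not template:
        raise ValueError("Template must be a valid format string")
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    # BIDS path templates for the two modalities kept in this example
    t1w = create_key("sub-{subject}/{session}/anat/sub-{subject}_{session}_T1w")
    dwi = create_key("sub-{subject}/{session}/dwi/sub-{subject}_{session}_dwi")
    info = {t1w: [], dwi: []}
    for s in seqinfo:
        # Hypothetical matching conditions on the series' protocol name
        name = s.protocol_name.lower()
        if "t1" in name:
            info[t1w].append(s.series_id)
        elif "dwi" in name or "dti" in name:
            info[dwi].append(s.series_id)
    return info
```
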

Running Heudiconv

After installation, set the parameters and run it as follows. Heudiconv can process multiple sets of subject data, i.e., multiple bundles of DICOM files, at once.

heudiconv --files DICOM_FILE_DIRS -o OUTPUT_DIR -f HEURISTIC.PY -s SUB_ID -ss SES_ID -c dcm2niix -b minmeta --overwrite 
  • DICOM_FILE_DIRS: Input the DICOM files for multiple subjects in a globbing format (e.g., dataset/sub-001/ses-001/*/*.dcm)
  • OUTPUT_DIR: Path where the converted BIDS format folder will be saved
  • HEURISTIC.PY: Path to the heuristic.py file created above
  • SUB_ID: Subject id (e.g. 001)
  • SES_ID: Session id (e.g. 001)

Here is an example of how to run it. Enter the following code:

heudiconv --files data/*/*.dcm -o bids/data/ -f heuristic.py -s 0 -ss 0 -c dcm2niix -b minmeta --overwrite 

BIDS format folders will be created under bids/data/ as follows:


MRIQC

Once the MRI images are stored in the BIDS format, they can be input into MRIQC. MRIQC can be used by downloading the package via PyPI or through a Docker container.

With PyPI

First, install it using the following code:

python -m pip install -U mriqc

After installation, run the following code:

mriqc BIDS_ROOT_DIR OUTPUT_DIR participant --participant-label SUB_ID
  • BIDS_ROOT_DIR: Root path of the BIDS format folder
  • OUTPUT_DIR: Path where the MRIQC results will be saved
  • participant OR group: If set to participant, MRIQC analysis results will be obtained per subject; if set to group, MRIQC will analyze all images under the root path.
  • SUB_ID: In participant mode, specify the subject ID for analysis by entering it in --participant-label. Multiple IDs can be entered at once (e.g., --participant-label 001 002 003).

With Docker

I used MRIQC through Docker. The advantage of Docker containers is that they include all dependencies needed to run the program, ensuring a consistent environment. Enter the following code to run MRIQC at the participant level:

docker run -it --rm -v BIDS_ROOT_DIR:/data:ro -v OUTPUT_DIR:/out nipreps/mriqc:latest /data /out participant --participant_label SUB_ID [--verbose-reports]

Even if the nipreps/mriqc image is not downloaded, it will automatically download when you run the code.

  • BIDS_ROOT_DIR: Root path of the BIDS format folder. This is connected to the /data folder inside the container using the -v flag. The ro option stands for ‘read only’, meaning the container can read the mounted local path but cannot write to it.
  • OUTPUT_DIR: Path where the MRIQC results will be saved. This is connected to the /out folder inside the container. If you copy the contents of the /out folder in the container to your local machine, you will see that the results are saved in the OUTPUT_DIR.
    • To copy the internal container files: When running the above docker run command, remove the --rm (remove container after completion) option. After completion, execute docker cp CONTAINER_NAME:FILE_PATH LOCAL_PATH.
  • SUB_ID: Subject ID. Multiple IDs can be entered. (e.g. --participant_label 001 002 003)
  • --verbose-reports (Optional): If this flag is included, four additional plots will be reported along with the default visual report plot.

After running the above code, you can check the list of Docker images and containers to see the MRIQC-related items that have been executed.

docker_ex


MRIQC Results

mriqc_ex1_1

When the MRIQC analysis is complete, the files shown above will appear under the OUTPUT_DIR. Among these, the analysis results are contained in the plot image files within the figures folder and in the JSON and HTML files named after each input image, such as sub-0_ses-0_T1w.json and sub-0_ses-0_T1w.html in this example. The results report is generated as an HTML file based on the plot images and JSON files.

ex1 ex2 ex3

By opening the HTML file, you can view a report like the one above. By interpreting the report using the visualized plots and quality metric scores, you can determine the quality of the images.




[MRIQC 2] Brain Imaging Data Structure (BIDS)

Brain Imaging Data Structure (BIDS)

The Brain Imaging Data Structure (BIDS) was created to streamline the organization and sharing of neuroimaging and behavioral data. The driving force behind BIDS is the need for a standardized format in neuroimaging research to prevent misunderstandings, eliminate the time spent on data reorganization, and improve reproducibility. By offering a straightforward and intuitive structure for data, BIDS aims to promote collaboration, speed up research, and make neuroimaging data more accessible to a diverse range of scientists.

BIDS provides detailed guidelines on how to format and name files, ensuring consistency across studies. It supports various neuroimaging modalities, including MRI, MEG, EEG, and iEEG, and is extensible, allowing for the integration of new data types and metadata. Additionally, BIDS is supported by a growing ecosystem of tools and software that facilitate data validation, analysis, and sharing, further enhancing its utility in the research community.

BIDS Format

BIDS was inspired by the format used by the OpenfMRI repository, which is now known as OpenNeuro. The BIDS format is essentially a method for organizing data and metadata within a hierarchical folder structure. It makes minimal assumptions about the tools needed to interact with the data, allowing for flexibility and broad compatibility. This structure helps standardize data organization, facilitating easier data sharing, analysis, and collaboration within the neuroimaging research community.

fig1

Folders

There are four levels of the folder hierarchy, and all sub-folders except for the root folder have a specific structure to their name. The format and the example names can be described as:

Project/
└─ Subject (e.g. 'sub-01/')
  └─ Session (e.g. 'ses-01/')
    └─ Datatype (e.g. 'anat')
  • Project: contains the entire dataset; can have any name.
  • Subject: contains the data of one subject, one folder per subject. Each subject has a unique label.
    • Name format: sub-<participant_label>
  • Session: represents a recording session. Each subject may have multiple sessions if data were gathered on several occasions.

    • If there is only a single session per subject, this level may be omitted.
    • Name format: ses-<session_label>
  • Datatype: represents different types of data.

    fig2

    • anat: anatomical MRI data
    • func: functional MRI data
    • fmap: fieldmap data
    • dwi: diffusion MRI data
    • perf: arterial spin labeling data
    • eeg: electroencephalography data
    • meg: magnetoencephalography data
    • ieeg: intracranial EEG data
    • beh: behavioral data
    • pet: positron emission tomography data
    • micr: microscopy data
    • nirs: near-infrared spectroscopy data
    • motion: motion capture data

Files

Three main types of files:

  • .json file: contains metadata
  • .tsv file: contains tables of metadata
  • Raw data files: e.g. .jpg, .nii.gz

Standardized ways of naming files:

  • Do not include white spaces in file names
  • Use only letters, numbers, hyphens, and underscores.
  • Do not rely on letter case: Some operating systems regard a as the same as A
  • Use separators and case in a systematic and meaningful way.
    • CamelCase or snake_case
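The naming rules above can be approximated with a small regular-expression check. This is a rough sketch only, far less strict than the official BIDS validator:

```python
import re

# Entity-value pairs joined by underscores, values limited to letters
# and digits, ending in a suffix (e.g. T1w, bold, dwi) plus extension
FILENAME = re.compile(
    r"^sub-[a-zA-Z0-9]+"          # required subject entity
    r"(_[a-z]+-[a-zA-Z0-9]+)*"    # optional entities like ses-, task-
    r"_[a-zA-Z0-9]+"              # suffix
    r"\.[a-z.]+$"                 # extension, e.g. .nii.gz, .json
)

def looks_bids_like(name: str) -> bool:
    """True if the filename follows the basic naming rules above."""
    return " " not in name and FILENAME.match(name) is not None

print(looks_bids_like("sub-01_ses-01_T1w.nii.gz"))   # True
print(looks_bids_like("sub 01 T1w.nii.gz"))          # False
```
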

Filename template

fig3

Example:

Dataset/
 └─ participants.json
 └─ participants.tsv
 └─ sub-01/
   └─ anat/
     └─ sub-01_T1w.nii.gz
     └─ sub-01_T1w.json
   └─ func/
     └─ sub-01_task-rest_bold.nii.gz
     └─ sub-01_task-rest_bold.json
   └─ dwi/
     └─ sub-01_dwi.nii.gz



[MRIQC 1] MRIQC: Magnetic Resonance Imaging Quality Control

To advance research on MRI images and enhance quality, it is essential to check the condition of the image data and secure high-quality data. However, assessing MRI quality is challenging due to several factors. There are many types of artifacts that can occur during MRI scans, people evaluate image quality differently, and some artifacts are difficult for humans to detect. In this context, an objective MRI quality control (QC) system can be helpful in the early stages of MRI quality assessment. Additionally, the recent trend of acquiring very large image data samples from multiple scanning sites increases the need for fully automated and minimally biased QC protocols.

Magnetic Resonance Imaging Quality Control (MRIQC)

MRIQC (Magnetic Resonance Imaging Quality Control) can be used as an automated tool for assessing MRI quality. MRIQC is an open-source tool designed to evaluate the quality of structural (anatomical) and functional MRI images. MRIQC extracts image quality metrics (IQMs) solely from the input images themselves, without referencing any target images. Additionally, it provides a standardized method for evaluating and comparing MRI scans from various sources or sessions.

Principles

  • Modular and Integrable: MRIQC uses a modular workflow built on the Nipype framework, integrating various third-party software toolboxes such as ANTs and AFNI.
  • Minimal Preprocessing: It focuses on minimal preprocessing to estimate IQMs from the original or minimally processed data, ensuring that the quality metrics reflect the raw image data as closely as possible.
  • Interoperability and Standards: MRIQC adheres to the Brain Imaging Data Structure (BIDS) standard, promoting interoperability and facilitating integration into various neuroimaging workflows.
  • Reliability and Robustness: The tool is rigorously tested for robustness against data variability, ensuring consistent performance across different datasets and acquisition parameters.
  • Visual Reports: MRIQC generates detailed visual reports for both individual images and group analyses. These reports include mosaic views and segmentation contours for individual images, and scatter plots for group analyses to identify outliers.

Image Quality Metrics (IQMs)

MRIQC computes a range of IQMs categorized into four main groups:

  • Noise-related metrics: Evaluate the impact and characteristics of noise within the images.
  • Information theory-based metrics: Assess the spatial distribution of information using prescribed masks.
  • Artifact detection metrics: Identify and measure the impact of specific artifacts, such as inhomogeneity and motion-related signal leakage.
  • Statistical and morphological metrics: Characterize the statistical properties of tissue distributions and the sharpness/blurriness of images.

Paper

Esteban O, Birman D, Schaer M, Koyejo OO, Poldrack RA, Gorgolewski KJ (2017) MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites. PLoS ONE 12(9): e0184661. https://doi.org/10.1371/journal.pone.0184661

How to run MRIQC

Interested in running MRIQC? Check out this post for detailed instructions.

