AI

PyTorch

Introduction

PyTorch is a machine learning framework based on the Torch library. It is used for applications such as computer vision and natural language processing, was originally developed by Meta AI, and is now part of the Linux Foundation umbrella.

Many AI frameworks can be used within MeVisLab. However, we currently do not ship a pre-integrated AI framework, because AI frameworks evolve very quickly by nature and we want to avoid compatibility issues.

Example 1: Installing PyTorch using the PythonPip module

Introduction

The module PythonPip allows you to install additional Python packages to be used in MeVisLab.

The module can install packages either into the global MeVisLab installation directory or into your defined user package. We will use the user package directory, because the installed packages then remain available in your packages even if you uninstall or update MeVisLab. In addition, no administrative rights are necessary if MeVisLab was installed for all users.
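After installing a package, you can verify from a Python console that it is actually importable in your MeVisLab Python environment. A minimal sketch using only the standard library (checking for `torch` here is just an example):

```python
import importlib.util

def is_installed(package_name: str) -> bool:
    """Return True if the given package can be found by the import system."""
    return importlib.util.find_spec(package_name) is not None

# Example: check whether PyTorch is available after installation
print(is_installed("torch"))
```

This only confirms the package is on the import path; it does not validate the installed version.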

Example 2: Brain Parcellation using PyTorch

Introduction

In this example, we use a pre-trained PyTorch deep learning model (HighRes3DNet) to perform a full brain parcellation. HighRes3DNet is a 3D residual network presented by Li et al. in "On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task".

Steps to do

Add a LocalImage module to your workspace and select the file MRI_Head.dcm. The model requires the input data to be resampled to a defined size. Add a Resample3D module to the LocalImage module and open its panel. Set Keep Constant to Voxel Size and define the Image Size as 176, 217, 160.
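What the Resample3D module does here can be illustrated outside MeVisLab as well. A hedged NumPy sketch of nearest-neighbor resampling to the fixed target shape (the dummy array is a stand-in for the real MRI_Head.dcm data; Resample3D itself offers more sophisticated interpolation):

```python
import numpy as np

def resample_to_shape(volume: np.ndarray, target_shape: tuple) -> np.ndarray:
    """Nearest-neighbor resampling of a 3D volume to a fixed target shape."""
    # For each axis, pick the nearest source index for every target position
    indices = [
        np.clip(np.round(np.linspace(0, s - 1, t)).astype(int), 0, s - 1)
        for s, t in zip(volume.shape, target_shape)
    ]
    return volume[np.ix_(*indices)]

# A small random volume stands in for the loaded head scan
dummy = np.random.rand(64, 64, 32).astype(np.float32)
resampled = resample_to_shape(dummy, (176, 217, 160))
print(resampled.shape)  # (176, 217, 160)
```

The target shape 176 x 217 x 160 matches the input size the pre-trained model expects.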

Example 3: Segment persons in webcam videos

Introduction

This tutorial is based on Example 2: Face Detection with OpenCV. You can re-use some of the scripts already developed in the other tutorial.

Steps to do

Add the macro module developed in the previous example to your workspace.

WebCamTest module

Open the internal network of the module via middle mouse button / mouse wheel, and right-click the tab of the workspace showing the internal network. Select Show Enclosing Folder.
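Once a segmentation model has produced a binary person mask for a webcam frame, the core operation is masking out the background. A minimal NumPy sketch (the frame and mask here are small synthetic stand-ins for the real webcam input and model output):

```python
import numpy as np

def apply_person_mask(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Black out all pixels of an RGB frame that lie outside the person mask."""
    # mask: 2D array with 1 for "person" and 0 for "background"
    return frame * mask[:, :, None].astype(frame.dtype)

# Synthetic 4x4 white RGB frame; the mask covers the upper-left quadrant
frame = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 1
out = apply_person_mask(frame, mask)
print(out[0, 0], out[3, 3])  # [255 255 255] [0 0 0]
```

In the actual tutorial network, the mask would come from the segmentation model applied per frame of the webcam stream.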

MONAI

Introduction

MONAI (Medical Open Network for AI) is an open-source framework built on PyTorch, designed for developing and deploying AI models in medical imaging.

Created by NVIDIA and the Linux Foundation, it provides specialized tools for handling medical data formats like DICOM and NIfTI, along with advanced preprocessing, augmentation, and 3D image analysis capabilities.

MONAI includes ready-to-use deep learning models (such as UNet and SegResNet) and utilities for segmentation, classification, and image registration. It supports distributed GPU training and ensures reproducible research workflows.

Example 1: Installing MONAI using the PythonPip module

Introduction

With the PythonPip module, you can install additional Python libraries for use in MeVisLab.

Steps to do

Install PyTorch

As MONAI requires PyTorch, install it by using the PythonPip module as described here.

Install MONAI

After installing torch and torchvision, we install MONAI.

To install MONAI, enter "monai" into the Command textbox and press Install.

Example 2: Applying a spleen segmentation model from MONAI in MeVisLab

Introduction

In the following, we will perform a spleen segmentation using a model from the MONAI Model Zoo. The MONAI Model Zoo is a collection of pre-trained models for medical imaging, offering standardized bundles for tasks like segmentation, classification, and detection across MRI, CT, and pathology data, all built for easy use and reproducibility within the MONAI framework. Further information and the required files can be found here.
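Raw network output for organ segmentation often contains small spurious islands around the main structure. A common post-processing step, not part of the MONAI bundle itself but sketched here with plain NumPy under that assumption, is to keep only the largest connected component of the binary spleen mask:

```python
from collections import deque
import numpy as np

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest 6-connected foreground component of a 3D binary mask."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    current = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # voxel already belongs to a labeled component
        current += 1
        labels[seed] = current
        queue, size = deque([seed]), 0
        while queue:  # breadth-first flood fill of one component
            x, y, z = queue.popleft()
            size += 1
            for dx, dy, dz in offsets:
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
        sizes[current] = size
    if not sizes:
        return np.zeros_like(mask)
    best = max(sizes, key=sizes.get)
    return (labels == best).astype(mask.dtype)

# Two blobs: a 3x3x3 cube and an isolated voxel -- only the cube survives
m = np.zeros((8, 8, 8), dtype=np.uint8)
m[1:4, 1:4, 1:4] = 1
m[6, 6, 6] = 1
print(largest_component(m).sum())  # 27
```

For large volumes, a library routine such as a labeling function from an image-processing package would be faster; the sketch above only illustrates the idea.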