Extensions
Introduction
Note
This section explains the types of Kaapana extensions and how they work. For descriptions of the available workflow and application extensions, refer to Workflows and Applications. To learn how to integrate custom components into the platform as extensions, refer to Developing Applications and Developing Workflows.
The Extension functional unit in Kaapana serves as an app store. It allows users to install/uninstall applications, workflows, and even platforms (experimental feature). Technically, an extension is a Helm chart.
Each extension in the Kaapana repository consists of two folders: docker and <extension-name>-chart. For more information about the file structure, refer to the Helm Charts section Advanced: How Kaapana uses Helm.
There are two types of extensions:
Workflow Extensions: Consist of one or more executable DAGs for Apache Airflow. After installing a workflow extension, its DAGs are available under the Workflow Execution menu.
Applications: Provide additional functionality such as a VS Code server, a JupyterLab notebook, or an MITK Workbench instance.
In addition to this distinction in kind, extensions are also distinguished by version: stable or experimental. Stable extensions are tested and maintained, while experimental extensions are not. The filters on the Extensions page allow users to filter extensions by version, and the extension list is updated in real time based on the selected filters. The Extensions page also displays the current Helm and Kubernetes status of each extension, such as Running, Completed, Failed, or Error.
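In the Kaapana repository, both the kind and the version of an extension are typically declared as keywords in the chart's Chart.yaml. The following is a minimal, illustrative sketch; the chart name is made up, and the exact keyword names should be verified against existing extension charts:

apiVersion: v2
name: my-extension-chart        # hypothetical chart name
version: 0.1.0
keywords:
  - kaapanaworkflow             # or kaapanaapplication for applications
  - kaapanaexperimental         # omit this keyword for stable extensions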
Note
Kaapana supports multi-installable extensions, which will have a “Launch” button instead of “Install”. Each time a multi-installable extension is launched, it is deployed as a separate Helm release.
Hint
To install a specific version of an extension, use the dropdown in the version column.
The section Developing Workflows also explains how to write and add your own extensions.
Uploading Extensions to the Platform
Kaapana also provides an experimental upload component for extensions. This allows users to upload both Docker images and Helm charts to the platform. Currently, this component only accepts two file types: “.tar” for exported images and “.tgz” for Helm charts.
This feature is intended for developers who are familiar with configuring Helm charts and Kubernetes resources. It is strongly recommended to read the following sections before uploading anything to the platform: Advanced: How Kaapana uses Helm and Writing Dockerfile.
Chart Upload:
Uploaded chart files are checked for basic safety measures, such as whether they run any resources under the admin namespace.
To create a zipped chart file that can be uploaded to Kaapana, run the following Helm commands inside the chart folder:
helm dep up
helm package .
Hint
If the build step is already completed, all chart tgz files (and their respective folders) should be available under kaapana/build/kaapana-admin-chart/kaapana-extension-collection/charts. The structure should be the same as that of the DAGs and services already available there.
For any Kubernetes resource yaml inside the templates folder (e.g. deployment, job), the image tag should be referenced correctly (example field that needs to be changed).
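For illustration, a container entry in a deployment template could reference the image as follows. This is only a sketch: the chart name, labels, and the registry value key are assumptions and must be adapted to your chart:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-extension
spec:
  selector:
    matchLabels:
      app: my-extension
  template:
    metadata:
      labels:
        app: my-extension
    spec:
      containers:
        - name: my-extension
          # the tag must exactly match the tag of the image uploaded to the platform
          image: "{{ .Values.global.registry_url }}/my-extension:0.1.0"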
Image Upload:
Uploaded images are automatically imported into the microk8s ctr environment (for details, see the images import command here).
A useful command to check whether the image was imported into the container runtime with the correct tag is
microk8s ctr images ls | grep <image-tag>
Hint
Since the images uploaded via this component are not available in a registry, the imagePullPolicy field in the corresponding Kubernetes resource yaml files (example value to be changed) should be changed to IfNotPresent.
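Continuing the illustrative deployment sketch from above, the container entry for an uploaded image would then look like this (the image name and tag are placeholders):

containers:
  - name: my-extension
    # must match the tag shown by "microk8s ctr images ls"
    image: my-extension:0.1.0
    # the image exists only in the local containerd store, not in a registry,
    # so Kubernetes must not attempt to pull it
    imagePullPolicy: IfNotPresent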
If you have any issues regarding the upload mechanism, check Container Upload Failed.
Extension Parameters
Introduced in version 0.2.0, extensions support specifying parameters as environment variables. This functionality can be customized according to the requirements of the extension. Some examples of available parameters are task_IDs for nnUNet and the service_type field for MITK Workbench. Parameters can be of type string, boolean, single_selectable, or multi_selectable. Parameters should be defined in the values.yaml file of the chart. Each of them should follow this structure:
extension_params:
  <parameter_name>:
    default: <default_value>
    definition: "definition of the parameter"
    type: oneof (string, bool, list_single, list_multi)
    value: <value_entered_by_the_user>
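As a concrete, hypothetical example, a single-selectable service_type parameter (as used by MITK Workbench) could be defined in values.yaml like this; the default value and definition text are made up for illustration:

extension_params:
  service_type:
    default: "NodePort"
    definition: "Kubernetes service type used to expose the application"
    type: list_single
    value: "NodePort"   # replaced by the value the user selects in the UI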
Workflows
nnU-Net (nnunet-predict)
nnU-Net (nnunet-training)
nnU-Net (nnunet-ensemble)
nnU-Net (nnunet-model-management)
TotalSegmentator
Automatic organ segmentation (shapemodel-organ-seg)
Radiomics (radiomics-dcmseg)
MITK Flow
Note
The DcmSendOperator fails when no data is sent.
Applications
Code server
The settings of the code server are stored in /kaapana/app/.vscode/settings.json.
JupyterLab
JupyterLab is an excellent tool for swiftly analysing data stored in the MinIO object store. It comes preinstalled with a wide array of commonly used Python packages for data analysis. You can deploy multiple instances of JupyterLab simultaneously, each with a dedicated MinIO bucket named after the respective JupyterLab instance. Data stored within this bucket is available to the JupyterLab application through the /minio/jupyterlab directory. You can save your .ipynb analysis scripts to the directory /minio/analysis-scripts. Files in this directory are automatically transferred to the MinIO bucket named analysis-scripts and are available to the JupyterlabReportingOperator. While JupyterLab is great for exploratory data analysis, for more complex calculations consider developing a dedicated Airflow DAG.
MITK Workbench
The MITK Workbench is an instance of MITK running in a container and available to users via Virtual Network Computing (VNC). Multiple instances of MITK can be deployed simultaneously. For each deployment a dedicated MinIO bucket is created, named after the respective MITK instance. To import data into the running MITK container, upload your data to the /input directory within this MinIO bucket. All data stored at this path of the MinIO bucket will be transferred to the /input directory of the MITK container. If you wish to retrieve your results from the MITK application, ensure to save them to the /output directory within the MITK container. Any data placed in this directory will be automatically transferred to the /output directory within the dedicated MinIO bucket.
Slicer Workbench
The Slicer workbench is an instance of 3D Slicer running in a container and available to users via Virtual Network Computing (VNC). Multiple instances of Slicer can be deployed simultaneously. For each deployment a dedicated MinIO bucket is created, named after the respective Slicer instance. To import data into the running Slicer container, upload your data to the /input directory within this MinIO bucket. All data stored at this path of the MinIO bucket will be transferred to the /input directory of the Slicer container. If you wish to retrieve your results from the Slicer application, ensure to save them to the /output directory within the Slicer container. Any data placed in this directory will be automatically transferred to the /output directory within the dedicated MinIO bucket.
Tensorboard
Tensorboard can be launched to analyse results generated during training. Multiple instances of Tensorboard can be deployed simultaneously. For each deployment a dedicated MinIO bucket is created, named after the respective Tensorboard instance. Data stored within this bucket is available to the Tensorboard application.