Background
A fundamental task in most aspects of medical image computing is segmentation, i.e. delineation of anatomical structures of interest for further processing and quantification. Segmentation can be performed fully manually, semi-automatically by initializing an algorithm with limited user input, or fully automatically by an autonomous algorithm. A multitude of software tools and algorithms is available for each approach, and the field remains an area of active research.
Representing the results, however, is a challenge common to every segmentation method. Most commonly, segmentation results are stored in 3D binary volumes (labelmaps) that simply indicate whether a volumetric element (voxel) is inside or outside the structure. This representation is also optimal as input for most processing algorithms. When visualizing these structures, however, a surface model is the best choice, and it is a very different representation of the data: instead of a structured grid of voxels, it consists of a point cloud connected by triangles that can be rendered in 3D. Besides these two basic representations, several others are in use. In radiation therapy (RT), the DICOM standard [1] requires structures to be stored as a series of planar contours. Certain segmentation algorithms yield labelmaps whose voxels indicate probabilities instead of a binary decision. Even more obscure representations exist, such as ribbons, which are sometimes used as a quick way to visualize planar contours.
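To make the difference between these representations concrete, the following minimal sketch (pure Python, with entirely hypothetical toy data) stores the same small two-slice structure both as a binary labelmap and as per-slice planar contours:

```python
# Hypothetical illustration: the same two-slice structure stored two ways.

# 1) Binary labelmap: a structured grid of voxels, 1 = inside, 0 = outside.
labelmap = [
    [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 0]],   # slice 0
    [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0]],   # slice 1
]

# Voxel-wise quantification is trivial on a labelmap -- one reason it is
# the preferred input for most processing algorithms.
voxel_count = sum(v for s in labelmap for row in s for v in row)
print(voxel_count)  # 7

# 2) Planar contours (the form DICOM-RT requires): an ordered list of
# boundary points per slice; compact for storage and transfer, but not
# directly usable for voxel-wise computation.
contours = {
    0: [(1, 1), (2, 1), (2, 2), (1, 2)],
    1: [(1, 1), (2, 1), (2, 2)],
}
print(len(contours))  # 2 slices
```

Converting between such representations (e.g. contours to labelmap before dose-volume analysis) is exactly the kind of operation that must otherwise be performed manually by the user.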
Unfortunately, representing and processing anatomical structures involves major difficulties, including: 1) Operation: the user needs to be aware of when a conversion is needed and how to perform it, 2) Identity: keeping track of the origin of the structures (provenance) and what they represent, 3) Validity: derived representations may become invalid after the source data changes, so it is imperative to ensure that no invalid data is accessible at any time, and 4) Coherence: as the structures in a set typically belong to the same entity (e.g. a patient), the in-memory objects related to the structures need to form a unified whole.
Software mechanism for dynamic management of anatomic structures
We propose a new data type in 3D Slicer (introduced earlier in the SlicerRT toolkit), as part of a complete mechanism that manages the contained representations and performs conversions as needed. The segmentation node contains multiple structures and multiple representations in the same object (addressing the “Identity” difficulty). It is thus possible to keep the structures synchronized after any change in the underlying data (“Coherence”). Whenever a representation is requested by the application, for example when the 3D view accesses the surface representation of a segmentation for visualization, conversion is performed automatically (“Operation”). The concept of the master representation ensures that no invalid data remains available (“Validity”). The master representation is the one in which the data was originally created (a labelmap when segmenting manually or semi-automatically, or planar contours if loaded from DICOM-RT), and it is the source of all conversions. When the master representation changes, e.g. while segmentation of an organ is in progress, all other representations are cleared and re-converted only when requested again. The conversion algorithms were carefully chosen and implemented to cover the widest possible variety of datasets. Conversions are driven by a directed graph whose nodes are the representations and whose edges are the conversion algorithms; the mechanism attempts to find the computationally cheapest conversion path between the master and the requested representation.
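The core of the mechanism can be sketched as follows. This is a simplified, hypothetical illustration (class name, converter costs, and placeholder conversion functions are all invented for the example, not taken from the actual implementation): representations are nodes of a directed graph, converters are weighted edges, the cheapest path from the master is found with Dijkstra's algorithm, and editing the master clears all derived representations.

```python
import heapq
import itertools

class SegmentationSketch:
    """Toy model of the conversion mechanism: a directed graph of
    representations with weighted converter edges, a master
    representation as the source of all conversions, and invalidation
    of derived data when the master changes."""

    def __init__(self, master_name, master_data, converters):
        # converters: {(src, dst): (cost, conversion_function)}
        self.master_name = master_name
        self.representations = {master_name: master_data}
        self.converters = converters

    def set_master(self, data):
        # Editing the master invalidates all derived representations
        # ("Validity"): they are cleared and rebuilt only on request.
        self.representations = {self.master_name: data}

    def get(self, name):
        if name in self.representations:
            return self.representations[name]
        # Dijkstra over the conversion graph, starting from the master.
        best = {self.master_name: 0}
        tiebreak = itertools.count()  # avoid comparing function lists
        heap = [(0, next(tiebreak), self.master_name, [])]
        while heap:
            cost, _, node, path = heapq.heappop(heap)
            if node == name:
                data = self.representations[self.master_name]
                for convert in path:        # apply converters along path
                    data = convert(data)
                self.representations[name] = data  # cache for reuse
                return data
            for (src, dst), (w, fn) in self.converters.items():
                if src == node and cost + w < best.get(dst, float("inf")):
                    best[dst] = cost + w
                    heapq.heappush(
                        heap, (cost + w, next(tiebreak), dst, path + [fn]))
        raise KeyError(f"no conversion path to {name}")

# Placeholder converters with made-up costs; real converters would run
# e.g. surface extraction or contour rasterization.
converters = {
    ("labelmap", "surface"): (5, lambda d: f"surface({d})"),
    ("surface", "contours"): (2, lambda d: f"contours({d})"),
    ("labelmap", "contours"): (9, lambda d: f"direct_contours({d})"),
}
seg = SegmentationSketch("labelmap", "voxels", converters)
print(seg.get("contours"))  # contours(surface(voxels)) -- cheaper via surface
seg.set_master("voxels2")   # derived representations are now cleared
```

Note how the request for contours is routed through the surface representation (total cost 7) rather than the direct converter (cost 9), mirroring how the mechanism selects the computationally cheapest conversion path automatically.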
Applications
The potential applications of the segmentations infrastructure span every workflow that involves segmented anatomical structures. As the core implementation depends only on the VTK library, a wide variety of software tools in the field of medical image computing can readily adopt it. Within 3D Slicer, it is intended to be used for all segmentation-related operations. The new Segment Editor module, designed and implemented from scratch, aspires to be the main module for manual and semi-automatic segmentation and the successor of the Editor module used in 3D Slicer so far, which served as the starting point for the re-design.
Besides the basic need to create segmentations manually and semi-automatically, a wide range of applications can benefit from this system. As importing representations from other data types into segmentation nodes is straightforward, the mechanism can be relied on even if the data was created with a third-party software tool. Radiation therapy was the first target area, due to the numerous representations used in most of its workflows. Visualizing and analyzing RT datasets has become considerably more straightforward and robust since the mechanism was adopted. More specialized use cases have also benefited from it, such as fusion of magnetic resonance and ultrasound images for brachytherapy applications, and evaluation of dosimetric measurements using gel and film dosimeters.