This article presents a novel technique for automatic cephalometric landmark localization on 3-dimensional (3D) cone-beam computed tomography (CBCT) volumes by using an active shape model to search for landmarks in related projections.
Twenty-four random CBCT scans from a public data set were imported into Matlab (MathWorks, Natick, Mass) and processed. Orthogonal coronal and sagittal projections (digitally reconstructed radiographs) were created, and 2 trained active shape models were used to locate cephalometric landmarks on each. Finally, by relating the projections, 18 3-dimensional landmarks were located on CBCT volume representations.
Relative to our 3D gold standard, this method achieved a 3.64-mm mean error in the localization of cephalometric landmarks, with the highest localization errors in the porion and sella regions because of low volume definition in those areas.
The proposed algorithm for automatic 3D landmarking on CBCT volumes seems to be useful for 3D cephalometric analysis. This study shows that a fast 2-dimensional landmark search can be useful for 3D localization, which could save computational time compared with a full-volume analysis. Also, this research confirms that when CBCT is used for cephalometry, the projections are free of distortion, and the full structural information of a virtual patient can be managed on a personal computer.
A model-based algorithm for automatic landmarking on CBCT volumes was investigated.
We achieved a 3.6-mm mean error in 3-dimensional landmark localization.
We reviewed state-of-the-art automated landmarking methods for CBCT cephalometry.
As is well known in orthodontics, cephalometry describes the morphology of the craniofacial skeleton through skull measurements made on cephalograms. Cephalograms and digital radiographs are commonly used in conventional cephalometry, but they provide information on only a single plane (coronal, sagittal, or axial) of the 3-dimensional (3D) space, unlike cone-beam computed tomography (CBCT), which provides high-resolution images without overlapping or distortion and thus yields high-quality diagnostic images. CBCT was recently introduced to the dental community as a diagnostic tool and has become a standard imaging technique in orthodontics, because tomography scans provide accurate 3D information about the patient's size and position.
Recent and relevant studies of automated 3D cephalometric landmarking on CBCT volumes used volume analysis instead of surface analysis, in which located landmarks are annotated directly in volume voxels. For this research, 4 state-of-the-art studies were identified. The first, by Gupta et al, proposed an algorithm to search for 20 cephalometric landmarks in 30 preprocessed CBCT images by grouping search sections of the head. They reported an average accuracy of 2.01 mm, with 64.67% of the landmarks in a range of 0 to 2 mm, 82.67% from 0 to 3 mm, and 90.33% from 0 to 4 mm. Shahidi et al evaluated software designed to locate cephalometric landmarks in CBCT. They reported a mean error under 4 mm for 14 landmark localizations, and 63.57% of landmarks had a mean error of less than 3 mm compared with manual localization (the gold standard). Makram and Kamel proposed a system for automatic localization of 20 hard tissue cephalometric landmarks using Reeb graphs on 3D patient meshes, in which some graph nodes were taken as cephalometric landmarks; 90% of their landmarks had a localization error of less than 2 mm. Codari et al presented a method to automatically locate cephalometric points in volumetric reconstructions: after automatic segmentation of the hard tissue, a nonrigid holistic registration was performed between a reference template volume containing annotated cephalometric points and the study volume on which the points were to be located. These authors reported an average localization error of 1.99 mm for 21 points in 18 CBCT volumes.
CBCT volumes allow reconstructing the 3D structure of the skull; 2-dimensional (2D) conventional simulated x-ray images, or digitally reconstructed radiographs (DRRs), can then be computed for conventional cephalometry. DRRs can therefore be defined as 2D simulated approximations of a radiograph, with the advantage of maintaining the patient's size and position. Cephalometry can then be performed using more than 1 x-ray image; eg, Moshiri et al presented a cephalometry method using DRRs obtained from CBCT. They compared the accuracy of linear measurements made on conventional digital radiographs and on DRRs, concluding that there is no advantage in using conventional radiographs over DRRs for cephalometry.
Cephalometric landmarks have traditionally been studied in 2 dimensions for cephalometry in orthodontics. However, since the head is a 3D structure, landmark positions must be located in 3D space to be clinically useful. In this study, we explored statistical shape modeling for automatic 3D cephalometric landmarking by relating DRRs using an active shape model (ASM). An ASM is a simple and robust tool for statistical shape analysis. We describe how our ASM is constructed and used for automatic 3D cephalometric landmarking by relating 2 projections.
Material and methods
The sample for this experiment consisted of 24 CBCT head volume scans. No demographic data were available, and the images were not identified by age, sex, or ethnicity. The CBCT images were randomly selected from a public data set: the Virtual Skeleton Database from the Swiss Institute for Computer Assisted Surgery Medical Image Repository. As shown in Table I, 18 cephalometric landmarks were identified by consensus on the sagittal and coronal projections of each volume to train an ASM. To establish the true positions of the selected cephalometric landmarks, manual annotation was made twice, independently, by 2 observers with varying landmarking experience. Intraoperator data were obtained from 1 researcher, who placed the 18 landmarks 10 times on the 24 CBCT scans. Means and standard deviations were calculated from the manual annotations for each landmark, and the mean was taken as the anatomic gold standard. To set the 3D ground truth in the volumes, a multiplanar reconstruction was rendered in 3DSlicer, and the landmarks were saved as a list of fiducial points.
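The gold-standard protocol above (mean of repeated annotations, standard deviation as intraoperator variability, Euclidean distance as localization error) can be sketched as follows; this is a Python illustration with synthetic annotation data, not the authors' Matlab code:

```python
import numpy as np

# Hypothetical repeated manual annotations for one landmark on one scan:
# 10 trials x 3 coordinates (mm), as in the intraoperator protocol.
rng = np.random.default_rng(0)
true_point = np.array([12.0, -3.5, 40.2])
annotations = true_point + rng.normal(scale=0.3, size=(10, 3))

# The anatomic gold standard is the mean of the repeated annotations;
# the standard deviation quantifies intraoperator variability.
gold_standard = annotations.mean(axis=0)
spread = annotations.std(axis=0, ddof=1)

# Localization error of an automatic result vs the gold standard (Euclidean, mm).
auto_point = np.array([13.1, -2.9, 41.0])
error_mm = float(np.linalg.norm(auto_point - gold_standard))
```

The same computation, applied per landmark across all scans, yields the mean errors reported in the results.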
| Landmark | Definition |
|---|---|
| Sella (S) | Midpoint of rim between anterior clinoid processes in median plane |
| Nasion (N) | Midsagittal point at junction of frontal and nasal bones at nasofrontal suture |
| Basion (Ba) | Most inferior point on anterior margin of foramen magnum at base of clivus |
| Orbitale (O) | Most inferior point on infraorbital rim, right (OR) and left (OL) |
| Anterior nasal spine (ANS) | Most anterior limit of floor of nose at tip of ANS |
| Posterior nasal spine (PNS) | Point along palate immediately inferior to pterygomaxillary fossa |
| A-point (subspinale) | Most concave point of anterior maxilla |
| B-point (supramentale) | Most concave point on mandibular symphysis |
| Gonion (Go) | Point in the middle of the curvature at the left (GoL) and right (GoR) angles of the mandible |
| Pogonion (Pg) | Most anterior point along curvature of chin |
| Menton (M) | Most inferior point along curvature of chin |
| Porion (Po) | Most superior point of anatomic external auditory meatus, right (PoR) and left (PoL) |
| Gnathion (Gn) | Located on mandibular symphysis midway between Pg and M |
| Incisor inferior (Ii) | Incisal edge of the most prominent mandibular incisor |
| Incisor superior (Is) | Incisal edge of the most prominent maxillary incisor |
Volumes were provided as DICOM image sets consisting of approximately 320 slices with 0.4-mm isometric voxels. Since the volume data store the density values of the scanned body material in voxels, the data were loaded into Matlab (MathWorks, Natick, Mass) without preprocessing to synthesize various types of intensity projections from the CBCT volume data, also known as DRRs.
The proposed technique for automatic 3D landmarking in CBCT consists of 3 main steps, described in Table II and illustrated in Figure 1. After loading the DICOM data, the first step is to compute the corresponding DRRs in the coronal and sagittal projections for each volume. In the second step, the ASMs are initialized by manually locating sella in 2D on the sagittal DRR; the ASMs are then fitted to the bone shapes in the sagittal and coronal DRRs, and 18 landmarks are located in each of the 2 perpendicular projections. In the third step, the coronal and sagittal projections are related: the landmarks located in both projections are used to calculate a 3D coordinate for each landmark. Finally, the 3D landmarks can be visualized on the multiplanar reconstruction representation of the volume. Additionally, the Frankfort horizontal, sella-nasion, facial, mandibular, palatal, and basion-nasion planes can be calculated and shown in the volume.
| Step | Action | Mode |
|---|---|---|
| Start | Loading DICOM data into 3D viewer | Automated |
| Step 1 | Computing coronal and sagittal DRR projections | Automated |
| Step 2 | Initializing ASM search by clicking close to sella | Manual |
| Step 3 | Coronal and sagittal planes and landmark correlations | Automated |
| End | Definition of 3D cephalometric landmarks in CBCT volume | Automated |
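Step 3, relating the two orthogonal projections, can be sketched in a few lines of Python (the study used Matlab). The axis conventions below are assumptions for illustration, since the exact conventions are not specified here:

```python
def relate_projections(sagittal_pt, coronal_pt):
    """Combine one landmark located on two orthogonal DRRs into a 3D point.

    Assumed conventions (hypothetical, for illustration):
      sagittal_pt = (y, z): anterior-posterior and superior-inferior coords
      coronal_pt  = (x, z): left-right and superior-inferior coords
    The z coordinate is shared by both projections, so the two 2D
    estimates are averaged to reduce search noise.
    """
    y, z_sag = sagittal_pt
    x, z_cor = coronal_pt
    return (x, y, (z_sag + z_cor) / 2.0)

# Example: a landmark found at (110.0, 84.0) on the sagittal DRR
# and at (96.0, 86.0) on the coronal DRR (in mm from the volume origin).
point_3d = relate_projections((110.0, 84.0), (96.0, 86.0))
# point_3d == (96.0, 110.0, 85.0)
```

Landmarks visible in only one projection would need a different strategy; this sketch covers only the landmarks located in both.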
The principle of DRRs as intensity projections is to project all slices within a volume into a single 2D image, based directly on the attenuation absorption (Beer-Lambert) law:

$$I = I_0 \exp\!\left(-\sum_i \mu_i d_i\right)$$

where $I_0$ is the original ray value, $i$ indexes the voxels through which a ray passes, $\mu_i$ is the linear attenuation coefficient of the material in voxel $i$, and $d_i$ is the segment between the entrance and exit points of the ray in voxel $i$. Each pixel of the 2D final image is a combination of all voxels with the same 2D coordinates in every projected slice. The way these voxel values are combined is determined by the selected algorithm. We analyzed projection algorithms that give high contrast and good definition of the structures used for cephalometric landmark location. Conventional lateral cephalograms, and even DRRs, generally suffer from low contrast and poor visibility of many structures. In this study, we used an effective enhancement method for DRRs based on image fusion of different orthogonal projections.
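For axis-aligned orthogonal projections through isotropic voxels, the attenuation ray sum reduces to a sum along one array axis. A toy Python illustration with a hypothetical attenuation volume (the study used Matlab):

```python
import numpy as np

# Toy attenuation volume: mu[i, r, c] is the linear attenuation coefficient
# of voxel i along the ray direction (axis 0), in 1/mm.
mu = np.zeros((4, 3, 3))
mu[1, 1, 1] = 0.5   # a dense "bone" voxel on the central ray

d = 0.4             # ray segment length per voxel (mm), isotropic voxels
I0 = 1.0            # original ray value

# Beer-Lambert ray sum: each DRR pixel attenuates I0 by exp(-sum_i mu_i * d_i).
drr = I0 * np.exp(-np.sum(mu * d, axis=0))

# The central pixel passes through the dense voxel, so its value is
# exp(-0.5 * 0.4); rays through empty voxels keep the full value I0.
```

For oblique rays, $d_i$ varies per voxel and a proper ray-casting step would replace the simple axis sum.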
The CBCT slices and their thicknesses were extracted from the DICOM metadata using Matlab. The CBCT volumes used have 0.4-mm isotropic voxel resolution. Volumetric reconstruction occurred in real time and provided contiguous, color-related perpendicular axial, coronal, and sagittal 2D multiplanar reconstruction slices. A maximum intensity projection and a standard deviation intensity projection were calculated and fused under image fusion parameters to give enhanced, natural-looking contrast images without artifacts. This orthogonal DRR projection maintains real size in cephalometric landmark measurements and matches the actual CBCT volume with 0% magnification, compared with the 5% to 8% magnification of conventional perspective radiographs. Figure 2 shows image fusion projections that highlight bone structures. Finally, these images were used to train an ASM.
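The fusion of the maximum intensity projection and the standard deviation intensity projection can be sketched as a normalized weighted sum. This is a Python sketch; the fusion weight `alpha` is hypothetical, since the exact fusion parameters are not reported here:

```python
import numpy as np

def fused_drr(volume, alpha=0.5, axis=0):
    """Fuse a maximum intensity projection (MIP) and a standard-deviation
    intensity projection (SDP) of a CBCT volume into one enhanced DRR.

    alpha is a hypothetical fusion weight in [0, 1]; the projections are
    min-max normalized before fusion so both contribute on the same scale.
    """
    mip = volume.max(axis=axis).astype(float)
    sdp = volume.std(axis=axis)

    def normalize(img):
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)

    return alpha * normalize(mip) + (1 - alpha) * normalize(sdp)

# Synthetic stand-in for a CBCT stack (~320 slices in the real data).
rng = np.random.default_rng(1)
vol = rng.integers(0, 4096, size=(320, 64, 64)).astype(float)
drr = fused_drr(vol)
# drr is a 64 x 64 image with values in [0, 1]
```

The MIP emphasizes dense bone, while the SDP emphasizes edges where density varies along the ray; the weighted sum keeps both cues in one image.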
For this study, the ASM uses statistical deformations of shapes from a training set of hand-annotated images to create a model that is then fitted to unseen x-ray images (conventional or DRR) for automatic cephalometric landmarking. The process implies the manual placement of points in each radiograph following an established path to ensure correspondence across all images. Figure 3 shows the alignment of every annotated projection in the same space, whereas Figure 4, A and B, shows our template, which contains cephalometric landmarks and pseudolandmarks spaced along strong edges that regularly appear on lateral and frontal head radiographs. In total, 70 points were used to define a shape for the coronal projections and 95 points for the sagittal projections. The training images were imported into a custom manual cephalometric landmarking program created in Matlab, and the landmarks were identified with a cursor-driven pointer. A custom analysis was developed in our program to assist the observer in identifying and annotating a set of specific structure points and cephalometric landmarks on the images to build our models (Table I).
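The statistical shape model at the core of an ASM is conventionally built by principal component analysis of the aligned training shapes (the standard Cootes et al formulation; the paper does not detail its exact implementation). A minimal Python sketch with synthetic shapes standing in for the 24 annotated sagittal DRRs:

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, n_shapes = 95, 24          # sagittal model: 95 points, 24 training DRRs

# Synthetic aligned training shapes, each flattened to (x1..xn, y1..yn).
mean_true = rng.normal(size=2 * n_points)
shapes = mean_true + 0.1 * rng.normal(size=(n_shapes, 2 * n_points))

mean_shape = shapes.mean(axis=0)

# PCA via SVD of the centered shape matrix.
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigvals = s**2 / (n_shapes - 1)

# Keep enough modes to explain 95% of the shape variance.
t = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1
P = Vt[:t].T                          # modes of variation, shape (2n, t)

# Any plausible shape is x = mean_shape + P @ b, with each |b_k| typically
# constrained to 3 * sqrt(eigvals[k]) during the search.
b = np.zeros(t)
x = mean_shape + P @ b                # b = 0 reproduces the mean shape
```

During fitting, the search alternates between moving each point toward nearby image edges and projecting the result back into this model space, which keeps the located landmarks anatomically plausible.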
Head structures used to locate cephalometric landmarks are described by $n$ landmark points (manually located), determined in our set of 24 coronal DRR and 24 sagittal DRR training images. The landmark points $(x_1, y_1), \ldots, (x_n, y_n)$ are grouped as follows.
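In the standard ASM formulation, the $n$ annotated points of each image are grouped into a single $2n$-element shape vector; a minimal Python sketch of that grouping (the paper's own grouping convention continues in the original text and may differ):

```python
import numpy as np

def to_shape_vector(points):
    """Group n landmark points (x1, y1), ..., (xn, yn) into one shape
    vector x = (x1, ..., xn, y1, ..., yn)^T, the usual ASM grouping.
    """
    pts = np.asarray(points, dtype=float)        # shape (n, 2)
    return np.concatenate([pts[:, 0], pts[:, 1]])

# Example: 3 annotated points from one training DRR.
vec = to_shape_vector([(10.0, 5.0), (12.0, 7.0), (15.0, 4.0)])
# vec -> [10. 12. 15.  5.  7.  4.]
```

Stacking one such vector per training image gives the shape matrix on which the model statistics are computed.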