Estimating the three-dimensional vertebral orientation from a planar radiograph: Is it feasible?

Fabio Galbusera, Frank Niemeyer, Tito Bassani, Luca Maria Sconfienza, Hans Joachim Wilke

Research output: Contribution to journal › Article

Abstract

We trained a deep neural network for the three-dimensional estimation of the direction of the three anatomical axes (cranio-caudal, anteroposterior and laterolateral) of individual vertebrae from a single sagittal radiographic image acquired from an approximately lateral direction, with deviations from perfect alignment of up to 60 degrees. To this end, we exploited computed tomography (CT), which can be used to create simulated radiographic projections with different orientations, to build large training and validation datasets. In a set of 21 CT stacks, the locations of 5 landmark points were manually determined for L2, L3 and L4, for a total of 63 vertebrae. For each vertebra, 200 simulated projections approximately aligned with the sagittal plane, but including random perturbations of the projection direction, were built, resulting in 12,600 simulated radiographs with the corresponding local directions of the anatomical axes. These data were supplemented with 1765 lateral images of vertebrae acquired with a biplanar radiographic imaging system, for which the orientation was calculated by means of three-dimensional reconstruction. The whole dataset was used to train a deep neural network, ResNet-101, customized for the estimation of the three-dimensional components of the axes. The accuracy of the network was qualitatively and quantitatively tested on a large group of simulated radiographic images as well as on real lateral images acquired with a biplanar radiographic system for which the direction of the axes was known. Errors were lower than 3 degrees in 76% of the evaluations conducted on the simulated images, and in 86% for the real radiographs. The novel method will be useful for extracting three-dimensional information from planar images even in clinical cases in which vertebrae are markedly rotated due to spinal deformities or to an imprecise alignment of the patient with respect to the detector.
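The evaluation metric described above (the angle between a predicted and a ground-truth axis direction, and the share of evaluations below the 3-degree threshold) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names are hypothetical and only plain Python is assumed.

```python
import math

def angular_error_deg(pred, true):
    """Angle in degrees between two 3D direction vectors.

    `pred` and `true` are (x, y, z) tuples representing one anatomical
    axis (e.g. cranio-caudal) as estimated by the network and as known
    from CT or biplanar 3D reconstruction, respectively.
    """
    dot = sum(p * t for p, t in zip(pred, true))
    norm = (math.sqrt(sum(p * p for p in pred))
            * math.sqrt(sum(t * t for t in true)))
    # Clamp to [-1, 1] to guard acos() against floating-point round-off.
    cos_theta = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_theta))

def fraction_within(errors_deg, threshold_deg=3.0):
    """Share of evaluations with angular error below the threshold,
    i.e. the quantity reported as 76% (simulated) and 86% (real)."""
    return sum(1 for e in errors_deg if e < threshold_deg) / len(errors_deg)
```

For example, `angular_error_deg((1, 0, 0), (0, 1, 0))` yields 90 degrees, and `fraction_within` applied to a list of per-vertebra errors gives the fraction of sub-3-degree estimates.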

Original language: English
Article number: 109328
Journal: Journal of Biomechanics
DOI: 10.1016/j.jbiomech.2019.109328
Publication status: Accepted/In press - Jan 1 2019

Keywords

  • Deep learning
  • EOS
  • Planar radiograph
  • Pose
  • Vertebral orientation

ASJC Scopus subject areas

  • Biophysics
  • Orthopedics and Sports Medicine
  • Biomedical Engineering
  • Rehabilitation

Cite this

Estimating the three-dimensional vertebral orientation from a planar radiograph: Is it feasible? / Galbusera, Fabio; Niemeyer, Frank; Bassani, Tito; Sconfienza, Luca Maria; Wilke, Hans Joachim.

In: Journal of Biomechanics, 01.01.2019.

Research output: Contribution to journal › Article

@article{afd11ec42e144c22870c728712134632,
title = "Estimating the three-dimensional vertebral orientation from a planar radiograph: Is it feasible?",
abstract = "We trained a deep neural network for the three-dimensional estimation of the direction of the three anatomical axes (cranio-caudal, anteroposterior and laterolateral) of individual vertebrae from a single sagittal radiographic image acquired from an approximately lateral direction, with deviations from perfect alignment of up to 60 degrees. To this end, we exploited computed tomography (CT), which can be used to create simulated radiographic projections with different orientations, to build large training and validation datasets. In a set of 21 CT stacks, the locations of 5 landmark points were manually determined for L2, L3 and L4, for a total of 63 vertebrae. For each vertebra, 200 simulated projections approximately aligned with the sagittal plane, but including random perturbations of the projection direction, were built, resulting in 12,600 simulated radiographs with the corresponding local directions of the anatomical axes. These data were supplemented with 1765 lateral images of vertebrae acquired with a biplanar radiographic imaging system, for which the orientation was calculated by means of three-dimensional reconstruction. The whole dataset was used to train a deep neural network, ResNet-101, customized for the estimation of the three-dimensional components of the axes. The accuracy of the network was qualitatively and quantitatively tested on a large group of simulated radiographic images as well as on real lateral images acquired with a biplanar radiographic system for which the direction of the axes was known. Errors were lower than 3 degrees in 76{\%} of the evaluations conducted on the simulated images, and in 86{\%} for the real radiographs. The novel method will be useful for extracting three-dimensional information from planar images even in clinical cases in which vertebrae are markedly rotated due to spinal deformities or to an imprecise alignment of the patient with respect to the detector.",
keywords = "Deep learning, EOS, Planar radiograph, Pose, Vertebral orientation",
author = "Galbusera, Fabio and Niemeyer, Frank and Bassani, Tito and Sconfienza, {Luca Maria} and Wilke, {Hans Joachim}",
year = "2019",
month = "1",
day = "1",
doi = "10.1016/j.jbiomech.2019.109328",
language = "English",
journal = "Journal of Biomechanics",
issn = "0021-9290",
publisher = "Elsevier Limited",

}

TY - JOUR

T1 - Estimating the three-dimensional vertebral orientation from a planar radiograph

T2 - Is it feasible?

AU - Galbusera, Fabio

AU - Niemeyer, Frank

AU - Bassani, Tito

AU - Sconfienza, Luca Maria

AU - Wilke, Hans Joachim

PY - 2019/1/1

Y1 - 2019/1/1

N2 - We trained a deep neural network for the three-dimensional estimation of the direction of the three anatomical axes (cranio-caudal, anteroposterior and laterolateral) of individual vertebrae from a single sagittal radiographic image acquired from an approximately lateral direction, with deviations from perfect alignment of up to 60 degrees. To this end, we exploited computed tomography (CT), which can be used to create simulated radiographic projections with different orientations, to build large training and validation datasets. In a set of 21 CT stacks, the locations of 5 landmark points were manually determined for L2, L3 and L4, for a total of 63 vertebrae. For each vertebra, 200 simulated projections approximately aligned with the sagittal plane, but including random perturbations of the projection direction, were built, resulting in 12,600 simulated radiographs with the corresponding local directions of the anatomical axes. These data were supplemented with 1765 lateral images of vertebrae acquired with a biplanar radiographic imaging system, for which the orientation was calculated by means of three-dimensional reconstruction. The whole dataset was used to train a deep neural network, ResNet-101, customized for the estimation of the three-dimensional components of the axes. The accuracy of the network was qualitatively and quantitatively tested on a large group of simulated radiographic images as well as on real lateral images acquired with a biplanar radiographic system for which the direction of the axes was known. Errors were lower than 3 degrees in 76% of the evaluations conducted on the simulated images, and in 86% for the real radiographs. The novel method will be useful for extracting three-dimensional information from planar images even in clinical cases in which vertebrae are markedly rotated due to spinal deformities or to an imprecise alignment of the patient with respect to the detector.

AB - We trained a deep neural network for the three-dimensional estimation of the direction of the three anatomical axes (cranio-caudal, anteroposterior and laterolateral) of individual vertebrae from a single sagittal radiographic image acquired from an approximately lateral direction, with deviations from perfect alignment of up to 60 degrees. To this end, we exploited computed tomography (CT), which can be used to create simulated radiographic projections with different orientations, to build large training and validation datasets. In a set of 21 CT stacks, the locations of 5 landmark points were manually determined for L2, L3 and L4, for a total of 63 vertebrae. For each vertebra, 200 simulated projections approximately aligned with the sagittal plane, but including random perturbations of the projection direction, were built, resulting in 12,600 simulated radiographs with the corresponding local directions of the anatomical axes. These data were supplemented with 1765 lateral images of vertebrae acquired with a biplanar radiographic imaging system, for which the orientation was calculated by means of three-dimensional reconstruction. The whole dataset was used to train a deep neural network, ResNet-101, customized for the estimation of the three-dimensional components of the axes. The accuracy of the network was qualitatively and quantitatively tested on a large group of simulated radiographic images as well as on real lateral images acquired with a biplanar radiographic system for which the direction of the axes was known. Errors were lower than 3 degrees in 76% of the evaluations conducted on the simulated images, and in 86% for the real radiographs. The novel method will be useful for extracting three-dimensional information from planar images even in clinical cases in which vertebrae are markedly rotated due to spinal deformities or to an imprecise alignment of the patient with respect to the detector.

KW - Deep learning

KW - EOS

KW - Planar radiograph

KW - Pose

KW - Vertebral orientation

UR - http://www.scopus.com/inward/record.url?scp=85072089639&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85072089639&partnerID=8YFLogxK

U2 - 10.1016/j.jbiomech.2019.109328

DO - 10.1016/j.jbiomech.2019.109328

M3 - Article

AN - SCOPUS:85072089639

JO - Journal of Biomechanics

JF - Journal of Biomechanics

SN - 0021-9290

M1 - 109328

ER -