Improving intrapartum surveillance using a pattern recognition and machine learning approach
The development of antenatal and obstetric care in Sweden since the 1970s has significantly reduced the incidence of perinatal deaths (stillbirths or deaths within the first six days of life) – from 1.4% in 1973 to 0.4% in 2009. However, 1% of all term singletons still have a low Apgar score (a scoring system used to evaluate the clinical status of newborns) at five minutes after birth, indicating that these fetuses suffered some degree of asphyxia during labor. A low Apgar score has been shown to be associated with mental retardation and cerebral palsy. Thus, despite the impressive achievements of modern obstetric care, substantial potential for improvement remains. Even more effective obstetric care during labor could improve the health of thousands of children.
Today, surveillance of fetal condition during labor is performed using intermittent or continuous cardiotocography (CTG). The fetal heart rate (cardio-) is determined by ultrasound, and the uterine contractions (toco-) are measured by a pressure transducer. These measurements are plotted against time, producing a graphical representation. However, considerable expertise is required to interpret whether the fetal response to the uterine contractions is adequate, or whether it shows features consistent with fetal exhaustion or asphyxia. Even senior obstetricians tend to disagree when evaluating the same CTG curves.
The basic concept of the proposed project is to develop an automatic bedside CTG-interpreting computer program based on pattern recognition. We will also investigate whether information from other sources could add to the sensitivity and specificity when identifying fetuses at risk of developing intrapartum asphyxia. By building an efficient database combining all information sources, we will also be able to use a machine learning approach.
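To illustrate the pattern-recognition idea, the sketch below extracts a few simple summary features from a fetal heart rate (FHR) trace and flags traces that deviate from a normal pattern. The features (baseline, variability, a crude deceleration proxy) and all cut-off values are hypothetical illustrations, not the project's actual method:

```python
# Minimal sketch (hypothetical features and thresholds): summarise a
# fetal heart rate trace and flag patterns that deviate from normal.
from statistics import mean, stdev

def ctg_features(fhr_bpm):
    """Summarise an FHR trace (beats/min, sampled e.g. once per second)."""
    baseline = mean(fhr_bpm)
    variability = stdev(fhr_bpm)
    # Count samples more than 15 bpm below baseline -- a crude proxy
    # for decelerations, for illustration only.
    decel_samples = sum(1 for x in fhr_bpm if x < baseline - 15)
    return {"baseline": baseline,
            "variability": variability,
            "decel_fraction": decel_samples / len(fhr_bpm)}

def flag_trace(features):
    """Flag a trace as suspicious using illustrative cut-offs."""
    return (features["baseline"] < 110 or features["baseline"] > 160
            or features["variability"] < 1.0
            or features["decel_fraction"] > 0.2)

# A reassuring synthetic trace: baseline ~140 bpm, moderate variability.
normal = [140 + (i % 7) - 3 for i in range(600)]
# A suspicious synthetic trace: low baseline, almost no variability.
suspect = [100 + (i % 2) for i in range(600)]

print(flag_trace(ctg_features(normal)))   # False
print(flag_trace(ctg_features(suspect)))  # True
```

A real system would of course learn such decision rules from labeled data rather than hard-code them, and would combine the heart-rate features with the contraction signal.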
Machine learning and pattern recognition have become increasingly important tools in engineering, in particular within signal processing, computer vision and cognitive vision. With the advent of large databases, strong e-science research and novel machine learning techniques, it has been possible to push the state of the art within automatic classification of data considerably. As new tools develop, it becomes possible to approach and analyse new types of problems. One key ingredient here is the possibility to use large data sets, in the order of 100 000 labors. This gives us a unique potential for training (or parameter estimation) and testing on independent data. Increased amounts of data make it possible to use more complex models without over-fitting, and to verify the methods with greater certainty.
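The train/test principle mentioned above can be sketched in a few lines: fit a model on one part of the labeled data and verify it on an independent held-out part, so that the reported accuracy reflects generalisation rather than over-fitting. The data and the nearest-mean classifier below are purely synthetic illustrations:

```python
# Illustrative sketch (synthetic two-feature data, nearest-mean
# classifier): fit on a training split, verify on a held-out split.
import random

random.seed(0)

def make_example(label):
    """Synthetic 2-feature example; class 1 is shifted by +2 on both axes."""
    shift = 2.0 if label == 1 else 0.0
    return ([random.gauss(shift, 1.0), random.gauss(shift, 1.0)], label)

data = [make_example(i % 2) for i in range(1000)]
random.shuffle(data)
train, test = data[:800], data[800:]  # independent held-out set

def class_mean(examples, label):
    pts = [x for x, y in examples if y == label]
    return [sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts)]

# "Training" here is just estimating the two class means.
m0, m1 = class_mean(train, 0), class_mean(train, 1)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Evaluate only on the held-out examples.
correct = sum(1 for x, y in test
              if (0 if dist2(x, m0) < dist2(x, m1) else 1) == y)
print(f"held-out accuracy: {correct / len(test):.2f}")
```

With roughly 100 000 labors rather than the 1 000 synthetic points above, much richer models can be estimated while the held-out evaluation still gives a reliable estimate of real-world performance.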
PI: Assoc. Prof. Karin Källén
Dept Obstetrics and Gynecology, reproduction epidemiology, IKVL, EpiHealth, Lund
Prof. Karl Åström
Mathematical Sciences, eSSENCE, Lund