Boosted multivariate trees for longitudinal data

Amol Pande, Liang Li, Jeevanantham Rajeswaran, John Ehrlinger, Udaya B. Kogalur, Eugene H. Blackstone, Hemant Ishwaran

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel, flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically using P-splines with an estimated smoothing parameter. To avoid overfitting, we describe a relatively simple in-sample cross-validation method that can be used to estimate the optimal boosting iteration and that has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data.
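
To make the core idea concrete, below is a minimal Python sketch of gradient boosting with multivariate (multi-output) regression trees on a balanced longitudinal design, where every subject contributes a response vector over the same time grid. This is an illustration only, not the authors' algorithm or its R implementation (the boostmtree package): it assumes squared-error loss, ignores the P-spline modeling of feature-time interactions, the working covariance, and the in-sample cross-validation stopping rule described in the abstract, and the names used here (boosted_multivariate_trees, M, nu) are hypothetical.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def boosted_multivariate_trees(X, Y, M=500, nu=0.05, max_depth=3, seed=0):
        """Sketch only. X: (n, p) subject-level features; Y: (n, T) repeated
        measurements on a common time grid (balanced design)."""
        rng = np.random.RandomState(seed)
        n, T = Y.shape
        init = Y.mean(axis=0)                      # start from the per-time-point means
        F = np.tile(init, (n, 1))
        trees = []
        for m in range(M):
            residual = Y - F                       # negative gradient of squared-error loss
            tree = DecisionTreeRegressor(max_depth=max_depth, random_state=rng)
            tree.fit(X, residual)                  # one multi-output tree per boosting step
            F += nu * tree.predict(X)              # shrunken update (learning rate nu)
            trees.append(tree)
        return init, trees

    def predict_trajectories(init, trees, X_new, nu=0.05):
        """Predict the full T-dimensional trajectory for new subjects."""
        F = np.tile(init, (X_new.shape[0], 1))
        for tree in trees:
            F += nu * tree.predict(X_new)
        return F

In the paper itself, the number of boosting iterations is instead estimated with the in-sample cross-validation method mentioned above, and feature-time interactions are modeled semi-nonparametrically with P-splines rather than on the fixed time grid assumed in this sketch.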

Original language: English (US)
Pages (from-to): 1-29
Number of pages: 29
Journal: Machine Learning
ISSN: 0885-6125
Publisher: Springer Netherlands
DOI: 10.1007/s10994-016-5597-1
State: Accepted/In press - Nov 4, 2016

Keywords

  • Gradient boosting
  • Marginal model
  • Multivariate regression tree
  • P-splines
  • Smoothing parameter

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Pande, A., Li, L., Rajeswaran, J., Ehrlinger, J., Kogalur, U. B., Blackstone, E. H., & Ishwaran, H. (Accepted/In press). Boosted multivariate trees for longitudinal data. Machine Learning, 1-29. https://doi.org/10.1007/s10994-016-5597-1
