Journal article
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2024
APA
Xu, Z., Zhang, J., Greenberg, J. K., Frumkin, M. R., Javeed, S., Zhang, J. K., … Lu, C. (2024). Predicting Multi-dimensional Surgical Outcomes with Multi-modal Mobile Sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
Chicago/Turabian
Xu, Ziqi, Jingwen Zhang, Jacob K. Greenberg, Madelyn R. Frumkin, Saad Javeed, Justin K. Zhang, Braeden C. Benedict, et al. “Predicting Multi-Dimensional Surgical Outcomes with Multi-Modal Mobile Sensing.” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2024).
MLA
Xu, Ziqi, et al. “Predicting Multi-Dimensional Surgical Outcomes with Multi-Modal Mobile Sensing.” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2024.
BibTeX
@article{ziqi2024a,
title = {Predicting Multi-dimensional Surgical Outcomes with Multi-modal Mobile Sensing},
year = {2024},
journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
author = {Xu, Ziqi and Zhang, Jingwen and Greenberg, Jacob K. and Frumkin, Madelyn R. and Javeed, Saad and Zhang, Justin K. and Benedict, Braeden C. and Botterbush, Kathleen S. and Rodebaugh, T. and Ray, Wilson Z. and Lu, Chenyang}
}
Pre-operative prediction of post-surgical recovery is vital for clinical decision-making and personalized treatment, especially for lumbar spine surgery, where patients exhibit highly heterogeneous outcomes. Existing predictive tools mainly rely on traditional Patient-Reported Outcome Measures (PROMs), which fail to capture the long-term dynamics of patient conditions before surgery. Moreover, existing studies focus on predicting a single surgical outcome. However, recovery from spine surgery is multi-dimensional, comprising multiple distinct but interrelated outcomes, such as pain interference, physical function, and quality of recovery. In recent years, the emergence of smartphones and wearable devices has presented new opportunities to capture longitudinal, dynamic information about patients' conditions outside the hospital. This paper proposes a novel machine learning approach, Multi-Modal Multi-Task Learning (M3TL), using smartphones and wristbands to predict multiple surgical outcomes after lumbar spine surgery. We formulate the prediction of pain interference, physical function, and quality of recovery as a multi-task learning (MTL) problem. We leverage multi-modal data to capture the static and dynamic characteristics of patients, including (1) traditional features from PROMs and Electronic Health Records (EHR), (2) Ecological Momentary Assessments (EMA) collected via smartphones, and (3) sensing data from wristbands. Moreover, we introduce new features derived from the correlation between EMA and wearable features measured within the same time frame, effectively enhancing predictive performance by capturing the interdependencies between the two data modalities. Our model interpretation uncovers the complementary nature of the different data modalities and their distinctive contributions toward multiple surgical outcomes.
Furthermore, through individualized decision analysis, our model identifies personal high-risk factors to aid clinical decision-making and support personalized treatment. In a clinical study involving 122 patients undergoing lumbar spine surgery, our M3TL model outperforms a diverse set of baseline methods in predictive performance, demonstrating the value of integrating multi-modal data and learning from multiple surgical outcomes. This work contributes to advancing personalized peri-operative care with accurate pre-operative predictions of multi-dimensional outcomes.
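The two ideas highlighted in the abstract can be illustrated with a minimal sketch: a cross-modality correlation feature (Pearson correlation between EMA and wearable series measured over the same window) and a shared linear map predicting all three outcomes jointly. This is not the authors' M3TL implementation; all data here is synthetic, and the window length, feature choices, and the least-squares multi-output model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-patient daily EMA pain ratings and wearable step
# counts over the same (assumed) 14-day pre-operative window.
n_patients, n_days = 122, 14
ema = rng.normal(5, 2, size=(n_patients, n_days))          # daily pain EMA
steps = rng.normal(6000, 1500, size=(n_patients, n_days))  # daily step count

def pearson_rows(a, b):
    """Row-wise Pearson correlation between two (patients x days) arrays."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

# Cross-modality correlation feature: one scalar per patient capturing the
# interdependency of the EMA and wearable modalities in the same time frame.
corr_feat = pearson_rows(ema, steps)

# Feature matrix: per-modality summaries plus the correlation feature
# (and an intercept column).
X = np.column_stack(
    [ema.mean(axis=1), steps.mean(axis=1), corr_feat, np.ones(n_patients)]
)

# Multi-task flavor via one shared linear map fitted to all three outcomes
# at once: pain interference, physical function, quality of recovery
# (targets are synthetic placeholders).
Y = rng.normal(size=(n_patients, 3))
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
preds = X @ W
print(preds.shape)  # (122, 3): one prediction per outcome per patient
```

In the paper's formulation the three outcomes share representation through multi-task learning rather than a single least-squares fit; the shared weight matrix `W` above is only the simplest analogue of that joint structure.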