Medical SANSformers: Training self-supervised transformers without attention for Electronic Medical Records - 2021


Research Area:  Machine Learning

Abstract:

The application of Transformer neural networks to Electronic Health Records (EHR) is challenging due to the distinct, multidimensional sequential structure of EHR data, often leading to underperformance when compared to simpler linear models. Thus, the advantages of Transformers, such as efficient transfer learning and improved scalability, are not fully exploited in EHR applications. To overcome these challenges, we introduce SANSformer, a novel attention-free sequential model designed with inductive biases that cater to the unique characteristics of EHR data. Our main application area is predicting future healthcare utilization, a crucial task for effectively allocating healthcare resources. This task becomes particularly difficult when dealing with divergent patient subgroups. These subgroups, characterized by unique health trajectories and often small in size, such as patients with rare diseases, require specialized modeling approaches. To address this, we adopt a self-supervised pretraining strategy, which we term Generative Summary Pretraining (GSP). GSP predicts summary statistics of a future window in the patient's history based on their past health records, thus demonstrating potential to deal with the noisy and complex nature of EHR data. We pretrain our models on a comprehensive health registry encompassing close to one million patients, before fine-tuning them for specific subgroup prediction tasks. In our evaluations, SANSformer consistently outperforms strong EHR baselines. Importantly, our GSP pretraining method greatly enhances model performance, especially for smaller patient subgroups. Our findings underscore the substantial potential of bespoke attention-free models and self-supervised pretraining for enhancing healthcare utilization predictions across a broad range of patient groups.
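
To make the GSP idea concrete, below is a minimal, hypothetical sketch of a generative-summary pretraining objective: a model reads the medical codes in a patient's past window and is trained to regress onto summary statistics (here, per-code counts) of a future window. The encoder is a simple attention-free stand-in, not the actual SANSformer architecture, and all names, dimensions, and the choice of MSE loss are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn

VOCAB_SIZE = 500   # number of distinct medical codes (illustrative)
EMBED_DIM = 64

class SummaryPretrainer(nn.Module):
    """Toy stand-in for an attention-free EHR encoder with a GSP-style head."""
    def __init__(self, vocab_size=VOCAB_SIZE, embed_dim=EMBED_DIM):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.Sequential(              # attention-free encoder (placeholder)
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
        )
        self.head = nn.Linear(embed_dim, vocab_size)  # predicts future per-code counts

    def forward(self, past_codes):
        # past_codes: (batch, seq_len) integer code ids from the past window
        h = self.encoder(self.embed(past_codes)).mean(dim=1)  # mean-pool over time
        return self.head(h)                                    # (batch, vocab_size)

def gsp_loss(model, past_codes, future_codes):
    # Target = per-code counts in the future window, a simple "summary statistic".
    target = torch.zeros(future_codes.size(0), VOCAB_SIZE)
    target.scatter_add_(1, future_codes,
                        torch.ones_like(future_codes, dtype=torch.float))
    pred = model(past_codes)
    return nn.functional.mse_loss(pred, target)   # regress onto the summary

# Toy usage with random data: 8 patients, 32 past events, 16 future events each.
model = SummaryPretrainer()
past = torch.randint(0, VOCAB_SIZE, (8, 32))
future = torch.randint(0, VOCAB_SIZE, (8, 16))
loss = gsp_loss(model, past, future)
loss.backward()

After pretraining on such a summary-prediction objective, the encoder would be fine-tuned on the actual downstream task (e.g., healthcare-utilization prediction for a specific patient subgroup), which is the pretrain-then-fine-tune workflow the abstract describes.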

Keywords:  
Self-Supervised
Forecasting
Electronic Health Records
Attention-Free Models
Pretraining

Author(s) Name:  Yogesh Kumar, Alexander Ilin, Henri Salo, Sangita Kulathinal, Maarit K. Leinonen, Pekka Marttinen

Journal name:  Machine Learning

Conference name:  

Publisher name:  arXiv (arXiv:2108.13672)

DOI:  10.48550/arXiv.2108.13672

Volume Information: