Research Area: Machine Learning
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly within a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase the robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
Keywords:  
multimodal fusion
deep learning
deep learning in oncology
AI in oncology
multimodal AI
multimodal integration
Author(s) Name: Jana Lipkova, Richard J. Chen, Bowen Chen, Ming Y. Lu, Matteo Barbieri, Daniel Shao, Anurag J. Vaidya, Chengkuan Chen, Luoting Zhuang, Drew F.K. Williamson, Muhammad Shaban, Tiffany Y. Chen, Faisal Mahmood
Journal name: Cancer Cell
Conference name:
Publisher name: Elsevier
DOI: 10.1016/j.ccell.2022.09.012
Volume Information: Volume 40, Issue 10, 10 October 2022, Pages 1095-1110
Paper Link: https://www.sciencedirect.com/science/article/pii/S153561082200441X