
Multi-Modal Forecaster: Jointly Predicting Time Series and Textual Data

Authors :
Kim, Kai
Tsai, Howard
Sen, Rajat
Das, Abhimanyu
Zhou, Zihao
Tanpure, Abhishek
Luo, Mathew
Yu, Rose
Publication Year :
2024

Abstract

Current forecasting approaches are largely unimodal and ignore the rich textual data that often accompanies time series, in part due to the lack of well-curated multimodal benchmark datasets. In this work, we develop the TimeText Corpus (TTC), a carefully curated, time-aligned text and time-series dataset for multimodal forecasting. Our dataset is composed of sequences of numbers and text aligned to timestamps, and includes data from two different domains: climate science and healthcare. It is a significant addition to the small selection of multimodal datasets currently available. We also propose the Hybrid Multi-Modal Forecaster (Hybrid-MMF), a multimodal LLM that jointly forecasts both text and time series data using shared embeddings. However, contrary to our expectations, our Hybrid-MMF model does not outperform existing baselines in our experiments. This negative result highlights the challenges inherent in multimodal forecasting. Our code and data are available at https://github.com/Rose-STL-Lab/Multimodal_Forecasting.

Comment: 21 pages, 4 tables, 2 figures
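The abstract describes data as "sequences of numbers and text aligned to timestamps." As a minimal sketch of what such time-aligned multimodal records might look like (the `TimeTextRecord` class and `align` helper below are hypothetical illustrations, not the TTC's actual schema):

```python
from dataclasses import dataclass

@dataclass
class TimeTextRecord:
    """One time-aligned observation: a numeric value plus accompanying text."""
    timestamp: str
    value: float
    text: str

def align(series, notes):
    """Join numeric observations with text notes on their shared timestamps.

    Observations without a matching note receive an empty text field,
    so the numeric series stays complete.
    """
    note_map = dict(notes)
    return [TimeTextRecord(t, v, note_map.get(t, "")) for t, v in series]

# Toy example in the spirit of the climate domain mentioned in the abstract.
records = align(
    [("2024-01-01", 21.5), ("2024-01-02", 19.8)],
    [("2024-01-01", "Heat advisory issued.")],
)
```

A joint model like the proposed Hybrid-MMF would then consume both the `value` and `text` fields of each record through shared embeddings, rather than discarding the text as unimodal forecasters do.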

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.06735
Document Type :
Working Paper