Adapting speech models for stock price prediction

Frederic Voigt, Jose Alcaraz Calero, Keshav Dahal, Qi Wang, Kai von Luck, Peer Stelldinger

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Large language models (LLMs) have demonstrated remarkable success in the field of natural language processing (NLP). Despite their origins in NLP, these algorithms possess the theoretical capability to process any data type represented in an NLP-like format. In this study, we use stock data to illustrate three methodologies for processing regression data with LLMs, employing tokenization and contextualized embeddings. By leveraging the well-known LLM algorithm Bidirectional Encoder Representations from Transformers (BERT) [1], we apply quantitative stock price prediction methodologies to predict stock prices and stock price movements, showcasing the versatility and potential of LLMs in financial data analysis.
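
    As a rough illustration of the general idea (not the authors' method), the sketch below shows one way regression-style stock data might be tokenized for a BERT-style encoder: daily returns are discretized into quantile bins and each bin index is treated as a token ID. The bin count, sequence length, model sizes, and the randomly generated toy prices are all illustrative assumptions.

    ```python
    # A minimal sketch, assuming quantile-binned returns as the token vocabulary.
    # This is illustrative only and does not reproduce the paper's three methodologies.
    import numpy as np
    import torch
    from transformers import BertConfig, BertModel

    NUM_BINS = 128   # assumed size of the "return vocabulary"
    SEQ_LEN = 64     # assumed look-back window of trading days

    def returns_to_tokens(prices: np.ndarray, num_bins: int = NUM_BINS) -> np.ndarray:
        """Convert a price series into discrete token IDs via quantile binning."""
        rets = np.diff(prices) / prices[:-1]                          # simple daily returns
        edges = np.quantile(rets, np.linspace(0, 1, num_bins + 1)[1:-1])
        return np.digitize(rets, edges)                               # IDs in [0, num_bins - 1]

    # Randomly generated toy prices stand in for real market data.
    prices = 100 * np.cumprod(1 + 0.01 * np.random.randn(SEQ_LEN + 1))
    token_ids = torch.tensor(returns_to_tokens(prices)[None, :])      # shape (1, SEQ_LEN)

    # A small, randomly initialized BERT encoder over the return "vocabulary".
    config = BertConfig(vocab_size=NUM_BINS, hidden_size=64,
                        num_hidden_layers=2, num_attention_heads=4,
                        intermediate_size=128)
    encoder = BertModel(config)

    with torch.no_grad():
        hidden = encoder(input_ids=token_ids).last_hidden_state       # (1, SEQ_LEN, 64)

    # A regression or classification head on these contextualized embeddings could
    # then predict the next price or its movement; that head is omitted here.
    print(hidden.shape)
    ```
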
    Original language: English
    Title of host publication: 2024 IEEE 6th International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA)
    Publisher: IEEE
    Number of pages: 8
    ISBN (Electronic): 9798331505790
    ISBN (Print): 9798331505806
    DOIs
    Publication status: Published - 11 Feb 2025

    Keywords

    • finance
    • quantitative stock price prediction
    • natural language processing
    • stock movement prediction
    • fintech
    • machine learning
    • large language models
