Detection of Depression from Social Media Posts Using Sentiment Analysis and Random Multimodal Deep Learning

Amol Govind Patil, N. L. Gavankar

Abstract

Random multimodal deep learning recognizes depression symptoms and emotional expressions in social media interactions by jointly analyzing multiple modalities of social media content: text, images, and audio. The aim is to evaluate emotional states and mental health conditions by analyzing and unifying text, image, and audio data obtained from social media platforms. Bias-Aware Filtering (BAF) reduces biases in the text, image, and audio data, which yields more reliable predictions. Within the framework, Dynamic Neural Architecture Search (DNAS) automates the search for neural network architectures and improves model performance by adapting to complex, varied data. The Gorilla Troop Optimizer (GTO), which simulates the cooperative behavior of gorilla troops, further improves accuracy and efficiency when processing complex multimodal data. In tests of a Python implementation, the highest measured results were 80% with respect to the dropout rate, 85% with respect to the batch size, 80% with respect to the learning rate, and 85% with respect to the loss function. Future work could improve model precision with better resources and extend the approach to time-sensitive mental health detection systems that draw on varied information sources.
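
To make the fusion step concrete, the following is a minimal sketch in Python (PyTorch) of late-fusion multimodal classification over pre-extracted text, image, and audio features. All class names, layer sizes, and dimensions are illustrative assumptions rather than the authors' architecture, and the sketch deliberately omits the BAF, DNAS, and GTO components described in the paper.

import torch
import torch.nn as nn

class MultimodalDepressionClassifier(nn.Module):
    # Hypothetical late-fusion model: one projection head per modality,
    # mapping pre-extracted features (e.g., from separate text, image,
    # and audio encoders) into a shared hidden space, followed by a
    # classifier over the concatenated embeddings.
    def __init__(self, text_dim=768, image_dim=512, audio_dim=128,
                 hidden_dim=256, num_classes=2, dropout=0.5):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(3 * hidden_dim, num_classes),
        )

    def forward(self, text_feat, image_feat, audio_feat):
        # Concatenate the projected modality embeddings (late fusion)
        fused = torch.cat([
            self.text_proj(text_feat),
            self.image_proj(image_feat),
            self.audio_proj(audio_feat),
        ], dim=-1)
        return self.classifier(fused)

# Usage with random stand-in features for a batch of 4 posts
model = MultimodalDepressionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])

Concatenating per-modality projections before a shared classifier is the simplest fusion strategy; attention-based fusion is a common alternative when modalities may be missing or noisy, as is typical for social media posts.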
