
A Multimodal Learning Approach for Translating Live Lectures into MOOCs Materials

Bibliographic Details
Main Authors: Huang, Tzu-Chia, Chang, Chih-Yuan, Tsai, Hung-I, Tao, Han-Si
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Description
Summary: This paper introduces an AI-based solution for the automatic generation of MOOCs, aiming to efficiently create highly realistic instructional videos while ensuring high-quality content. The generated content strives to maintain content accuracy, video fluidity, and vivacity. The paper employs a multimodal model that understands text, images, and sound simultaneously, enhancing the accuracy and realism of video generation. The process involves three stages. First, the preprocessing stage employs OpenAI's Whisper for audio-to-text conversion, supplemented by FuzzyWuzzy and Large Language Models (LLMs) to improve content accuracy and detect thematic sections. In the second stage, speaker motion prediction begins with skeleton labels, from which the speaker's motions are classified into categories. A multimodal model, comprising BERT and a CNN, then extracts features from the text and voice diagrams, respectively; based on these features and the skeleton labels, the model learns to predict the classes of the speaker's motions. The final stage generates the MOOC audiovisuals, converting the text into subtitles with LLMs and applying the predicted speaker motions. Finally, a well-known lip-synchronization tool is used to ensure accurate alignment between voice and lip movements. With these approaches, the proposed mechanism ensures seamless alignment and consistency among the video elements, so that the generated MOOCs are realistic and up to date.
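
As a rough illustration of the first (preprocessing) stage described above, the sketch below transcribes lecture audio with OpenAI's Whisper and then corrects domain terms with FuzzyWuzzy fuzzy string matching. The glossary, audio file name, and score threshold are illustrative assumptions, and the paper's LLM-based thematic-section detection is not shown.

```python
# Minimal sketch of the stage-1 preprocessing, assuming the openai-whisper and
# fuzzywuzzy packages are installed. The glossary and threshold are hypothetical.
import whisper
from fuzzywuzzy import process

# Hypothetical course-specific glossary used to repair ASR misrecognitions.
GLOSSARY = ["convolution", "backpropagation", "regularization", "transformer"]

def transcribe(audio_path: str) -> str:
    """Convert lecture audio to text with Whisper."""
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

def correct_terms(transcript: str, threshold: int = 85) -> str:
    """Snap each word to its closest glossary term when the fuzzy score is high."""
    corrected = []
    for word in transcript.split():
        match, score = process.extractOne(word, GLOSSARY)
        corrected.append(match if score >= threshold else word)
    return " ".join(corrected)

if __name__ == "__main__":
    print(correct_terms(transcribe("lecture.wav"))[:300])
```

In the pipeline described in the summary, an LLM would additionally post-edit the corrected transcript and split it into thematic sections before the later stages.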
ISSN: 2575-8284
DOI: 10.1109/ICCE-Taiwan62264.2024.10674579
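
The second stage summarized above fuses BERT text features with CNN features extracted from the audio and learns motion categories from skeleton-derived labels. The PyTorch sketch below shows one plausible shape of such a classifier; the layer sizes, number of motion classes, and spectrogram dimensions are assumptions for illustration, not the authors' architecture.

```python
# Sketch of a BERT + CNN multimodal motion classifier trained on
# skeleton-derived motion labels. Shapes and class count are assumed.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

NUM_MOTION_CLASSES = 5  # assumed number of skeleton-derived motion categories

class MotionClassifier(nn.Module):
    """Fuses BERT text features with CNN features from an audio spectrogram."""
    def __init__(self, num_classes: int = NUM_MOTION_CLASSES):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Small CNN over a 1-channel spectrogram (e.g. 128 mel bins x time frames).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),  # -> 16 * 4 * 4 = 256
        )
        self.classifier = nn.Linear(768 + 256, num_classes)

    def forward(self, input_ids, attention_mask, spectrogram):
        text_feat = self.bert(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        audio_feat = self.cnn(spectrogram)
        return self.classifier(torch.cat([text_feat, audio_feat], dim=1))

# Dummy forward pass; real training would supervise with the skeleton labels.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["the lecturer points at the slide"], return_tensors="pt",
                  padding=True, truncation=True)
spectrogram = torch.randn(1, 1, 128, 64)  # placeholder audio representation
logits = MotionClassifier()(batch["input_ids"], batch["attention_mask"], spectrogram)
print(logits.shape)  # torch.Size([1, 5])
```

The predicted motion class would then drive the speaker animation in the final video-generation stage, together with the LLM-generated subtitles and the lip-synchronization step.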