Multimodal deep learning using on-chip diffractive optics with in situ training capability

Bibliographic Details
Published in:Nature communications 2024-07, Vol.15 (1), p.6189-10, Article 6189
Main Authors: Cheng, Junwei, Huang, Chaoran, Zhang, Jialong, Wu, Bo, Zhang, Wenkai, Liu, Xinyu, Zhang, Jiahui, Tang, Yiyi, Zhou, Hailong, Zhang, Qiming, Gu, Min, Dong, Jianji, Zhang, Xinliang
Format: Article
Language:English
Description
Summary:Multimodal deep learning plays a pivotal role in supporting the processing and learning of diverse data types within the realm of artificial intelligence generated content (AIGC). However, most photonic neuromorphic processors for deep learning can only handle a single data modality (either vision or audio) due to the lack of abundant parameter training in the optical domain. Here, we propose and demonstrate a trainable diffractive optical neural network (TDONN) chip based on on-chip diffractive optics with massive tunable elements to address these constraints. The TDONN chip includes one input layer, five hidden layers, and one output layer, and only one forward propagation is required to obtain the inference results without frequent optical-electrical conversion. A customized stochastic gradient descent algorithm and a drop-out mechanism are developed for photonic neurons to realize in situ training and fast convergence in the optical domain. The TDONN chip achieves a potential throughput of 217.6 tera-operations per second (TOPS) with high computing density (447.7 TOPS/mm²), high system-level energy efficiency (7.28 TOPS/W), and low optical latency (30.2 ps). The TDONN chip has successfully implemented four-class classification in different modalities (vision, audio, and touch) and achieves 85.7% accuracy on multimodal test sets. Our work opens up a new avenue for multimodal deep learning with integrated photonic processors, providing a potential solution for low-power AI large models using photonic technology. Most photonic processors can only handle a single data modality due to the lack of abundant parameter training in the optical domain. Here, the authors propose and demonstrate a trainable diffractive optical neural network chip based on on-chip diffractive optics with tunable elements to address these constraints.
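The abstract's key training idea — in situ stochastic gradient descent with drop-out applied directly to photonic neurons — can be illustrated with a toy numerical sketch. The model below is purely hypothetical and not the TDONN design: each "diffractive layer" is a fixed unitary mixing matrix followed by tunable phase shifters (the trained parameters), outputs are photodetected intensities, and gradients are estimated by perturbing each phase and re-measuring the loss (a finite-difference stand-in for the paper's customized on-chip SGD, since backpropagation through physical hardware is not available). Drop-out is modeled as randomly zeroing neuron outputs during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, not those of the TDONN chip.
N, LAYERS = 8, 5

# Fixed complex "diffractive" coupling per layer (random unitaries via QR).
mix = [np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))[0]
       for _ in range(LAYERS)]

def forward(phases, x, drop_mask=None):
    """One optical forward pass: mixing, tunable phases, optional drop-out."""
    field = x.astype(complex)
    for l in range(LAYERS):
        field = mix[l] @ field                   # fixed diffractive coupling
        field = field * np.exp(1j * phases[l])   # tunable phase shifters
        if drop_mask is not None:
            field = field * drop_mask[l]         # photonic neuron drop-out
    return np.abs(field) ** 2                    # photodetected intensities

def loss(phases, x, target, drop_mask=None):
    """Cross-entropy on normalized output intensities."""
    p = forward(phases, x, drop_mask)
    p = p / (p.sum() + 1e-12)
    return -np.log(p[target] + 1e-12)

def train_step(phases, x, target, lr=0.1, eps=1e-3, drop_p=0.2):
    """Zeroth-order (perturb-and-measure) SGD step with drop-out."""
    drop_mask = (rng.random((LAYERS, N)) > drop_p).astype(float)
    base = loss(phases, x, target, drop_mask)
    grad = np.zeros_like(phases)
    for l in range(LAYERS):
        for i in range(N):
            phases[l, i] += eps                  # perturb one phase shifter
            grad[l, i] = (loss(phases, x, target, drop_mask) - base) / eps
            phases[l, i] -= eps                  # restore it
    return phases - lr * grad

phases = rng.uniform(0, 2 * np.pi, size=(LAYERS, N))
x = np.eye(N)[2]                                 # toy one-hot input, class 2
loss0 = loss(phases, x, target=2)                # loss before training
for _ in range(200):
    phases = train_step(phases, x, target=2)
loss1 = loss(phases, x, target=2)                # loss after training
```

Only forward passes are needed here, mirroring the paper's point that inference requires a single optical propagation; all learning signal comes from repeated intensity measurements rather than electronic backpropagation.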
ISSN:2041-1723
DOI:10.1038/s41467-024-50677-3