Research on visual‐tactile cross‐modality based on generative adversarial network

Bibliographic Details
Published in: Cognitive Computation and Systems, 2021-06, Vol. 3 (2), pp. 131-141
Main Authors: Li, Yaoyao, Zhao, Huailin, Liu, Huaping, Lu, Shan, Hou, Yueyang
Format: Article
Language:English
Summary: Addressing assistive technology for the blind, a generative adversarial network model is proposed to transform visual information into tactile feedback. First, two key representations of the visual-to-tactile mapping are identified: the texture image of an object and the audio signal that drives vibrotactile feedback; the task is therefore essentially one of generating audio from images. The authors propose a cross-modal network framework that generates the corresponding vibrotactile signal from a texture image. Notably, the network is end-to-end: it eliminates the traditional intermediate step of converting the texture image into a spectrogram image and maps directly from the visual to the tactile domain. A quantitative evaluation system is also proposed to assess the performance of the network model. Experimental results show that the network can convert visual information into tactile signals, that the proposed method outperforms the existing method of indirectly generating vibrotactile signals, and that the model is broadly applicable.
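The end-to-end visual-to-tactile mapping described in the abstract can be illustrated at a toy scale. The sketch below is a hypothetical minimal analogue, not the authors' network: a small generator maps a flattened texture image directly to a vibrotactile waveform (with no intermediate spectrogram step), and a discriminator scores waveforms as in a standard GAN setup. All layer sizes, class names, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ToyGenerator:
    """Maps a flattened texture image directly to an audio-rate waveform,
    i.e. end-to-end vision-to-touch with no spectrogram intermediate."""
    def __init__(self, img_pixels=64 * 64, hidden=128, wave_len=1024):
        self.w1 = rng.normal(0.0, 0.02, (img_pixels, hidden))
        self.w2 = rng.normal(0.0, 0.02, (hidden, wave_len))

    def forward(self, img):
        h = relu(img.reshape(-1) @ self.w1)
        return np.tanh(h @ self.w2)  # waveform samples in [-1, 1]

class ToyDiscriminator:
    """Scores a waveform as real (recorded vibrotactile) vs. generated."""
    def __init__(self, wave_len=1024, hidden=64):
        self.w1 = rng.normal(0.0, 0.02, (wave_len, hidden))
        self.w2 = rng.normal(0.0, 0.02, (hidden, 1))

    def forward(self, wave):
        h = relu(wave @ self.w1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))  # sigmoid probability

G, D = ToyGenerator(), ToyDiscriminator()
texture = rng.random((64, 64))   # stand-in for an object texture image
wave = G.forward(texture)        # generated vibrotactile signal
score = D.forward(wave)          # discriminator's realism estimate
print(wave.shape, float(score))
```

In adversarial training the generator would be updated to raise the discriminator's score on generated waveforms while the discriminator learns to separate them from recorded vibrotactile signals; the paper's quantitative evaluation of generated signals is not reproduced here.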
ISSN: 2517-7567
1873-9601
1873-961X
DOI:10.1049/ccs2.12008