Many-to-Many Voice Conversion based Feature Disentanglement using Variational Autoencoder

Authors: Manh Luong, Viet Anh Tran

INTERSPEECH 2021

Abstract

Voice conversion is a challenging task that transforms the voice characteristics of a source speaker to those of a target speaker without changing the linguistic content. Recently, many works on many-to-many Voice Conversion (VC) based on Variational Autoencoders (VAEs) have achieved good results; however, these methods lack the ability to disentangle speaker identity from linguistic content, which is needed to perform well in unseen-speaker scenarios. In this paper, we propose a new method based on feature disentanglement to tackle many-to-many voice conversion. The method can disentangle speaker identity and linguistic content from utterances, and it can convert from many source speakers to many target speakers with a single autoencoder network. Moreover, it naturally handles unseen target speakers. We perform both objective and subjective evaluations to show the competitive performance of our proposed method compared with other state-of-the-art models in terms of naturalness and target speaker similarity.
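The paper's model is a trained neural network, but the conversion mechanics the abstract describes — encode linguistic content per frame, encode speaker identity at the utterance level, then recombine source content with a target speaker embedding through one decoder — can be illustrated with a toy sketch. Below is a minimal NumPy mock-up in which random linear maps stand in for the trained encoders and decoder; all dimensions, weights, and function names are hypothetical and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): mel bins, content dim, speaker dim.
N_MEL, D_CONTENT, D_SPEAKER = 80, 16, 8

# Toy linear maps standing in for trained encoder/decoder networks.
W_content = rng.standard_normal((D_CONTENT, N_MEL)) * 0.01
W_speaker = rng.standard_normal((D_SPEAKER, N_MEL)) * 0.01
W_decoder = rng.standard_normal((N_MEL, D_CONTENT + D_SPEAKER)) * 0.01

def encode_content(frames):
    # Per-frame linguistic-content latents: (T, N_MEL) -> (T, D_CONTENT).
    return frames @ W_content.T

def encode_speaker(frames):
    # One utterance-level speaker embedding: average frame encodings over time.
    return (frames @ W_speaker.T).mean(axis=0)

def decode(content, speaker):
    # Broadcast the single speaker vector across all content frames, then decode.
    z = np.concatenate([content, np.tile(speaker, (content.shape[0], 1))], axis=1)
    return z @ W_decoder.T

# Conversion = source content + target speaker identity. Because the speaker
# embedding is computed from any reference utterance, the target speaker can
# be unseen at training time.
src = rng.standard_normal((120, N_MEL))  # source utterance, 120 frames
tgt = rng.standard_normal((90, N_MEL))   # reference utterance of the target speaker
converted = decode(encode_content(src), encode_speaker(tgt))
print(converted.shape)  # (120, 80): source timing, target identity
```

With properly trained encoders and a VAE objective, the content latents would be regularized to discard speaker information, which is what makes the single decoder reusable across all speaker pairs.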

@article{VCManh2021,
  author  = {Manh Luong and Viet Anh Tran},
  title   = {Many-to-Many Voice Conversion based Feature Disentanglement using Variational Autoencoder},
  journal = {Interspeech},
  year    = {2021},
  note    = {to appear}
}