Support for other languages #30
You'll need to retrain with your own datasets to get another language running (and it's a lot of work). The speaker encoder is somewhat able to work on a few other languages than English because VoxCeleb is not purely English, but since the synthesizer/vocoder have been trained purely on English data, any voice that is not in English - and even, that does not have a proper English accent - will be cloned very poorly.
Thanks for the explanation. I have a big interest in adding support for other languages and would like to contribute.
You'll need a good dataset (at least ~300 hours, high quality, with transcripts) in the language of your choice. Do you have that?
I want to train another language. How many speakers do I need for the encoder? Or can I use the English speaker embeddings for my language?
From here:
The first one should be a large dataset of untranscribed audio that can be noisy. Think thousands of speakers and thousands of hours. You can get away with a smaller one if you finetune the pretrained speaker encoder.

The second one needs audio transcripts and high quality audio. Here, finetuning won't be as effective as for the encoder, but you can get away with less data (300-500 hours). You will likely not have the alignments for that dataset, so you'll have to adapt the preprocessing procedure of the synthesizer to not split audio on silences. See the code and you'll understand what I mean. Don't start training the encoder if you don't have a dataset for the synthesizer/vocoder, you won't be able to do anything then.
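For illustration, here is a minimal sketch of what "not splitting on silences" could look like in a custom preprocessing step. The file layout, sample rate and trimming threshold are assumptions for the sketch, not this repo's actual preprocessing code:

```python
# Hedged sketch: preprocess a dataset with no alignment files by treating
# each (audio, transcript) pair as a single utterance instead of splitting
# on silences. Paths, sample rate and top_db are assumptions.
from pathlib import Path

import librosa
import numpy as np

SAMPLE_RATE = 16000   # match whatever the synthesizer hparams expect
TRIM_TOP_DB = 30      # only trim leading/trailing silence, never split inside

def preprocess_utterance(wav_path: Path, transcript: str, out_dir: Path):
    # Load and resample the audio to the target rate
    wav, _ = librosa.load(str(wav_path), sr=SAMPLE_RATE)
    # Trim silence at the edges but keep the utterance in one piece
    wav, _ = librosa.effects.trim(wav, top_db=TRIM_TOP_DB)
    # Save the full utterance and its transcript for the synthesizer step
    out_dir.mkdir(parents=True, exist_ok=True)
    np.save(out_dir / (wav_path.stem + ".npy"), wav)
    (out_dir / (wav_path.stem + ".txt")).write_text(transcript, encoding="utf-8")

def preprocess_dataset(dataset_root: Path, out_dir: Path):
    # Assumes one .txt transcript sitting next to each .wav file
    for wav_path in dataset_root.rglob("*.wav"):
        txt_path = wav_path.with_suffix(".txt")
        if txt_path.exists():
            preprocess_utterance(wav_path, txt_path.read_text(encoding="utf-8").strip(), out_dir)
```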
Maybe it can be hacked by using audiobooks and their pdf-to-text versions. The difficulty, I guess, comes from the level of expressiveness in the data sources. Maybe movies could work, but sometimes the subtitles are really poor. Mozilla (Firefox) is working on a dataset too, if I remember well.
This is something that I have been slowly piecing together. I have been gathering audiobooks and their text versions that are in the public domain (Project Gutenberg & LibriVox Recordings). My goal as of now is to develop a solid package that can gather an audio file and corresponding book, performing necessary cleaning and such. Currently this project lives on my C:, but if there's interest in collaboration I'd gladly throw it here on GitHub.
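As a starting point, here is a minimal, hedged sketch of one cleaning step such a package might need: stripping the Project Gutenberg license header/footer and normalising whitespace before alignment. The marker strings are the common ones found in Gutenberg files, but individual books can differ:

```python
# Hedged sketch of a Gutenberg text cleaning step for an audiobook/text
# pairing pipeline: drop the license header/footer and collapse line breaks
# inside paragraphs while keeping paragraph boundaries.
import re

GUTENBERG_START = re.compile(r"\*\*\* START OF (THE|THIS) PROJECT GUTENBERG EBOOK.*\*\*\*", re.I)
GUTENBERG_END = re.compile(r"\*\*\* END OF (THE|THIS) PROJECT GUTENBERG EBOOK.*\*\*\*", re.I)

def clean_gutenberg_text(raw: str) -> str:
    start = GUTENBERG_START.search(raw)
    end = GUTENBERG_END.search(raw)
    # Keep only the body between the markers when both are present
    body = raw[start.end():end.start()] if start and end else raw
    # Collapse whitespace inside paragraphs; keep one paragraph per line
    paragraphs = [re.sub(r"\s+", " ", p).strip() for p in body.split("\n\n")]
    return "\n".join(p for p in paragraphs if p)
```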
How many speakers are needed for synthesizer/vocoder training?
You'd want hundreds of speakers at least. In fact, LibriSpeech-clean makes for 460 speakers and it's still not enough.
There's an open 12-hour Chinese female voice set from databaker that I tried with tacotron https://github.com/boltomli/tacotron/blob/zh/TRAINING_DATA.md#data-baker-data. Hope that I can gather more Chinese speakers to have a try on voice cloning. I'll update if I have some progress.
That's not nearly enough to learn about the variations in speakers. Especially not on a hard language such as Chinese.
@boltomli Take a look at this dataset (1505 hours, 6408 speakers, recorded on smartphones):
You actually want the encoder dataset not to always be of good quality, because that makes the encoder robust. It's different for the synthesizer/vocoder, because their dataset quality is (at best) the quality of the output you will get.
Couldn't this be hacked by creating new speakers with AI, like it is done for pictures?
How about training the encoder/speaker verification model using English multi-speaker datasets, but training the synthesizer using a Chinese database, supposing the data is enough for each individual model separately?
You can do that, but I would then add the synthesizer dataset in the speaker encoder dataset. In SV2TTS, they use disjoint datasets between the encoder and the synthesizer, but I think it's simply to demonstrate that the speaker encoder generalizes well (the paper is presented as a transfer learning paper over a voice cloning paper after all). There's no guarantee the speaker encoder works well on different languages than it was trained on. Considering the difficulty of generating good Chinese speech, you might want to do your best at finding really good datasets rather than hack your way around everything.
@CorentinJ Thank you for your reply, maybe I should find some Chinese ASR datasets to train the speaker verification model.
@Liujingxiu23 Have you trained a Chinese model? And could you share your model and the Chinese cloning results?
@magneter I have not trained the Chinese model; I don't have enough data to train the speaker verification model. I am trying to collect suitable data now.
@CorentinJ Hello, ignoring speakers outside the training dataset, if I only want to ensure the quality and similarity of wavs synthesized for speakers in the training dataset (librispeech-clean), how much training time (at least) per speaker do I need, maybe 20 minutes or less?
Wouldn't that be wonderful. You'll still need a good week or so. A few hours if you use the pretrained model. Although at this point what you're doing is no longer voice cloning, so you're not really in the right repo for that.
@zbloss I'm very interested. Would you be able to upload your entire dataset somewhere? Or if it's difficult to upload, is there some way I could acquire it from you directly? Thanks!
@CorentinJ @yaguangtang @tail95 @zbloss @HumanG33k I am finetuning the encoder model with Chinese data from 3100 speakers. I want to know how to judge whether the finetuning is going well. In Figure 0, the blue line is based on 2100 speakers and the yellow line is based on 3100 speakers, which is being trained now. Figure 1: (finetune 920k, from 1565k to 1610k steps, based on 2100 speakers). Figure 2: (finetune 45k, from 1565k to 1610k steps, based on 3100 speakers). I also want to know how many steps are enough in general, because so far I only know how to judge the effect by training the synthesizer and vocoder models one by one, which takes a very long time. How do my EER and loss look? Looking forward to your reply!
If your speakers are cleanly separated in the space (like they are in the pictures), you should be good to go! I'd be interested to compare with the same plots but before any training step was made, to see how the model does on Chinese data.
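For anyone wanting to reproduce that kind of plot, here is a minimal sketch of projecting utterance embeddings to 2D with UMAP to check speaker separation. It assumes you already have one embedding per utterance plus a speaker label for each, and that umap-learn and matplotlib are installed:

```python
# Hedged sketch: check whether speakers separate cleanly in embedding space
# by projecting the embeddings to 2D with UMAP and colouring by speaker.
import numpy as np
import umap
import matplotlib.pyplot as plt

def plot_speaker_separation(embeds: np.ndarray, speaker_ids: list):
    # embeds: shape (n_utterances, embed_dim); speaker_ids: one label per row
    projection = umap.UMAP(n_neighbors=10, min_dist=0.1).fit_transform(embeds)
    for speaker in sorted(set(speaker_ids)):
        mask = np.array([s == speaker for s in speaker_ids])
        plt.scatter(projection[mask, 0], projection[mask, 1], s=8, label=str(speaker))
    plt.title("Speaker embedding projection (UMAP)")
    plt.legend(fontsize=6)
    plt.show()
```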
Did you get around to training the model? I found these datasets in Spanish (and many other languages): https://commonvoice.mozilla.org/es/datasets
Same here! Let me know if there is any news or if you need any help with Spanish.
Hey, I ended up using the tacotron2 implementation by NVIDIA. If you train it in Spanish, it speaks Spanish, so I guess it will work.
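For anyone picking up the Common Voice datasets linked above, here is a minimal sketch of pairing the downloaded clips with their transcripts. The validated.tsv filename and the path/sentence column names match recent Common Voice releases, but check your version of the download:

```python
# Hedged sketch: read Common Voice's validated.tsv and return
# (clip_path, transcript) pairs for synthesizer preprocessing.
import csv
from pathlib import Path

def load_common_voice_pairs(cv_root: Path):
    pairs = []
    with open(cv_root / "validated.tsv", encoding="utf-8", newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            # Audio clips live under the clips/ folder of the download
            pairs.append((cv_root / "clips" / row["path"], row["sentence"]))
    return pairs
```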
Hello,
After a long training (especially for the vocoder), the output generated by means of the toolbox is really poor (it can't "speak" Italian). Did I do something wrong or did I miss some steps? Thank you in advance.
@andreafiandro Check the attention graphs from your synthesizer model training. You should get diagonal lines that look like this if attention has been learned. (This is required for inference to work) https://github.com/Rayhane-mamah/Tacotron-2/wiki/Spectrogram-Feature-prediction-network#tacotron-2-attention If it does not look like that, you'll need additional training for the synthesizer, check the preprocessing for problems, and/or clean your dataset.
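As a rough aid, here is a minimal sketch of plotting an attention alignment matrix to check for that diagonal. The `alignment` array is assumed to come from a synthesizer training/eval step with shape (decoder_steps, encoder_steps); how you obtain it depends on the synthesizer version you run:

```python
# Hedged sketch: visualise a Tacotron-style attention alignment matrix.
# A clean diagonal indicates that attention has been learned.
import matplotlib.pyplot as plt
import numpy as np

def plot_alignment(alignment: np.ndarray, path: str = "alignment.png"):
    fig, ax = plt.subplots(figsize=(6, 4))
    # Transpose so encoder steps (input characters) run along the y-axis
    im = ax.imshow(alignment.T, aspect="auto", origin="lower", interpolation="none")
    ax.set_xlabel("Decoder timestep")
    ax.set_ylabel("Encoder timestep (input characters)")
    fig.colorbar(im, ax=ax)
    fig.savefig(path)
    plt.close(fig)
```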
@andreafiandro Please, can you share your file trained for the Italian language? (pretrained.pt of the synthesizer)
Thank you, I have something really different from the expected diagonal line. Probably I made some mistake in the data preprocessing or the dataset is too poor. I will try again, checking the results using the plots. Do I need to edit some configuration file to set the list of characters for my language, or can I follow the same training steps described here? @VitoCostanzo I can share the file if you want, but it isn't working for the moment.
@andreafiandro - "Considerations - languages other than English" in #431 (comment)
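On the character-set question above, here is a minimal sketch of what extending the synthesizer's symbol set for Italian might look like. The file and variable names follow this repo's synthesizer/utils/symbols.py layout at the time of writing, so double-check them against your checkout and the considerations linked above:

```python
# Hedged sketch of an adapted synthesizer/utils/symbols.py for Italian.
# Variable names mirror the repo's layout; verify against your version.
_pad = "_"
_eos = "~"
_characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!'\"(),-.:;? "

# Accented characters your Italian transcripts actually contain
_italian_extra = "àèéìòù"

# The full symbol list used to map characters to embedding indices
symbols = [_pad, _eos] + list(_characters + _italian_extra)
```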
Hello, I am trying to train the system in Spanish.
How do I train for Turkish?
All links to KuangDD's projects are no longer accessible. I'm currently working on the latest fork of this repo to support Mandarin, and if anyone wants to use it as a reference, please feel free to fork and train: https://github.com/babysor/Realtime-Voice-Clone-Chinese
The original issue has been edited to provide visibility of community-developed voice cloning models in other languages. I'll also use it to keep track of requests.
Can this be done with some audiobooks?
When will French be done?
Have you had any luck with training Turkish?
I've made a custom fork https://github.com/neonsecret/Real-Time-Voice-Cloning-Multilang
@CorentinJ I am planning to use your pre-trained modules to generate English audio, but in my case I want my source audio to be Spanish, so I should only worry about training the encoder, right? And if I wanted to add emotions to the generated voice, does the vocoder support this?
@Abdelrahman-Shahda
@neonsecret Okay great. For the emotion part, should I keep extracting the embedding each time rather than once for a single user? (I don't know if this will cause the encoder embeddings to vary based on the emotions.)
@Abdelrahman-Shahda I think you should just train as normal; if your emotional audio has exclamation marks in the transcript (like "hello!" or "hello!!"), you should be fine.
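On the per-utterance embedding question, here is a minimal sketch of extracting one embedding per reference clip with this repo's encoder inference module, so differences in delivery between clips show up in the embedding. The API names follow the repo's encoder package; the model path is an assumption, adjust it to your checkout:

```python
# Hedged sketch: compute one speaker embedding per reference clip instead of
# a single embedding per speaker. Model path below is an assumption.
from pathlib import Path

from encoder import inference as encoder

encoder.load_model(Path("encoder/saved_models/pretrained.pt"))

def embed_per_utterance(wav_paths):
    embeddings = {}
    for wav_path in wav_paths:
        # preprocess_wav resamples, normalises volume and trims long silences
        wav = encoder.preprocess_wav(Path(wav_path))
        embeddings[wav_path] = encoder.embed_utterance(wav)
    return embeddings
```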
Hi everyone, I would like to know how much training time each module requires using a GPU (approx.).
Could you please share the Chinese encoder model with me? @UESTCgan
Available languages
Chinese (Mandarin): #811
German: #571*
Swedish: #257*
* Requires Tensorflow 1.x (harder to set up).
Requested languages (not available yet)
Arabic: #871
Czech: #655
English: #388 (UK accent), #429 (Indian accent)
French: #854
Hindi: #525
Italian: #697
Polish: #815
Portuguese: #531
Russian: #707
Spanish: #789
Turkish: #761
Ukrainian: #492