>>> sokolkk
[August 12, 2019, 12:30pm]
Hi! I'm trying to run TTS, but I keep getting errors like the one below. I have tried different models and configs from different points in the git history, but the errors won't go away. :frowning:
> Loading TTS model ...
> | > model config: C:\Users\SDE-02\PycharmProjects\TTS1\config.json
> | > model file: checkpoint_272976.pth.tar
> Setting up Audio Processor...
> | > sample_rate:22050
> | > num_mels:80
> | > min_level_db:-100
> | > frame_shift_ms:12.5
> | > frame_length_ms:50
> | > ref_level_db:20
> | > num_freq:1025
> | > power:1.5
> | > preemphasis:0.98
> | > griffin_lim_iters:60
> | > signal_norm:True
> | > symmetric_norm:False
> | > mel_fmin:0
> | > mel_fmax:8000.0
> | > max_norm:1.0
> | > clip_norm:True
> | > do_trim_silence:True
> | > n_fft:2048
> | > hop_length:275
> | > win_length:1102
> Using model: Tacotron2
> Traceback (most recent call last):
> File "C:/Users/SDE-02/PycharmProjects/TTS1/server/test1.py", line 5, in &lt;module&gt;
> synthesizer = Synthesizer(config)
> File "C:\Users\SDE-02\PycharmProjects\TTS1\server\synthesizer.py", line 29, in \_\_init\_\_
> self.load_tts(self.config.tts_path, self.config.tts_file, self.config.tts_config, config.use_cuda)
> File "C:\Users\SDE-02\PycharmProjects\TTS1\server\synthesizer.py", line 61, in load_tts
> self.tts_model.load_state_dict(cp\['model'\])
> File "C:\Users\SDE-02\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 845, in load_state_dict
> self.\_\_class\_\_.\_\_name\_\_, "\n\t".join(error_msgs)))
> RuntimeError: Error(s) in loading state_dict for Tacotron2:
> Missing key(s) in state_dict: 'encoder.convolutions.0.net.0.weight',
> 'encoder.convolutions.0.net.0.bias',
> 'encoder.convolutions.0.net.1.weight',
> 'encoder.convolutions.0.net.1.bias',
> 'encoder.convolutions.0.net.1.running_mean',
> 'encoder.convolutions.0.net.1.running_var',
> 'encoder.convolutions.1.net.0.weight',
> 'encoder.convolutions.1.net.0.bias',
> 'encoder.convolutions.1.net.1.weight',
> 'encoder.convolutions.1.net.1.bias',
> 'encoder.convolutions.1.net.1.running_mean',
> 'encoder.convolutions.1.net.1.running_var',
> 'encoder.convolutions.2.net.0.weight',
> 'encoder.convolutions.2.net.0.bias',
> 'encoder.convolutions.2.net.1.weight',
> 'encoder.convolutions.2.net.1.bias',
> 'encoder.convolutions.2.net.1.running_mean',
> 'encoder.convolutions.2.net.1.running_var',
> 'encoder.lstm.weight_ih_l0', 'encoder.lstm.weight_hh_l0',
> 'encoder.lstm.bias_ih_l0', 'encoder.lstm.bias_hh_l0',
> 'encoder.lstm.weight_ih_l0_reverse',
> 'encoder.lstm.weight_hh_l0_reverse',
> 'encoder.lstm.bias_ih_l0_reverse', 'encoder.lstm.bias_hh_l0_reverse',
> 'decoder.prenet.layers.0.linear_layer.weight',
> 'decoder.prenet.layers.1.linear_layer.weight',
> 'decoder.attention_rnn.weight_ih', 'decoder.attention_rnn.weight_hh',
> 'decoder.attention_rnn.bias_ih', 'decoder.attention_rnn.bias_hh',
> 'decoder.attention_layer.query_layer.linear_layer.weight',
> 'decoder.attention_layer.inputs_layer.linear_layer.weight',
> 'decoder.attention_layer.v.linear_layer.weight',
> 'decoder.attention_layer.v.linear_layer.bias',
> 'decoder.decoder_rnn.weight_ih', 'decoder.decoder_rnn.weight_hh',
> 'decoder.decoder_rnn.bias_ih', 'decoder.decoder_rnn.bias_hh',
> 'decoder.linear_projection.linear_layer.weight',
> 'decoder.linear_projection.linear_layer.bias',
> 'decoder.stopnet.1.linear_layer.weight',
> 'decoder.stopnet.1.linear_layer.bias',
> 'decoder.attention_rnn_init.weight', 'decoder.go_frame_init.weight',
> 'decoder.decoder_rnn_inits.weight',
> 'postnet.convolutions.0.net.0.weight',
> 'postnet.convolutions.0.net.0.bias',
> 'postnet.convolutions.0.net.1.weight',
> 'postnet.convolutions.0.net.1.bias',
> 'postnet.convolutions.0.net.1.running_mean',
> 'postnet.convolutions.0.net.1.running_var',
> 'postnet.convolutions.1.net.0.weight',
> 'postnet.convolutions.1.net.0.bias',
> 'postnet.convolutions.1.net.1.weight',
> 'postnet.convolutions.1.net.1.bias',
> 'postnet.convolutions.1.net.1.running_mean',
> 'postnet.convolutions.1.net.1.running_var',
> 'postnet.convolutions.2.net.0.weight',
> 'postnet.convolutions.2.net.0.bias',
> 'postnet.convolutions.2.net.1.weight',
> 'postnet.convolutions.2.net.1.bias',
> 'postnet.convolutions.2.net.1.running_mean',
> 'postnet.convolutions.2.net.1.running_var',
> 'postnet.convolutions.3.net.0.weight',
> 'postnet.convolutions.3.net.0.bias',
> 'postnet.convolutions.3.net.1.weight',
> 'postnet.convolutions.3.net.1.bias',
> 'postnet.convolutions.3.net.1.running_mean',
> 'postnet.convolutions.3.net.1.running_var',
> 'postnet.convolutions.4.net.0.weight',
> 'postnet.convolutions.4.net.0.bias',
> 'postnet.convolutions.4.net.1.weight',
> 'postnet.convolutions.4.net.1.bias',
> 'postnet.convolutions.4.net.1.running_mean',
> 'postnet.convolutions.4.net.1.running_var'.
> Unexpected key(s) in state_dict: 'last_linear.weight',
> 'last_linear.bias', 'encoder.prenet.layers.0.weight',
> 'encoder.prenet.layers.0.bias', 'encoder.prenet.layers.1.weight',
> 'encoder.prenet.layers.1.bias',
> 'encoder.cbhg.conv1d_banks.0.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.0.bn.weight',
> 'encoder.cbhg.conv1d_banks.0.bn.bias',
> 'encoder.cbhg.conv1d_banks.0.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.0.bn.running_var',
> 'encoder.cbhg.conv1d_banks.1.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.1.bn.weight',
> 'encoder.cbhg.conv1d_banks.1.bn.bias',
> 'encoder.cbhg.conv1d_banks.1.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.1.bn.running_var',
> 'encoder.cbhg.conv1d_banks.2.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.2.bn.weight',
> 'encoder.cbhg.conv1d_banks.2.bn.bias',
> 'encoder.cbhg.conv1d_banks.2.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.2.bn.running_var',
> 'encoder.cbhg.conv1d_banks.3.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.3.bn.weight',
> 'encoder.cbhg.conv1d_banks.3.bn.bias',
> 'encoder.cbhg.conv1d_banks.3.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.3.bn.running_var',
> 'encoder.cbhg.conv1d_banks.4.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.4.bn.weight',
> 'encoder.cbhg.conv1d_banks.4.bn.bias',
> 'encoder.cbhg.conv1d_banks.4.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.4.bn.running_var',
> 'encoder.cbhg.conv1d_banks.5.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.5.bn.weight',
> 'encoder.cbhg.conv1d_banks.5.bn.bias',
> 'encoder.cbhg.conv1d_banks.5.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.5.bn.running_var',
> 'encoder.cbhg.conv1d_banks.6.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.6.bn.weight',
> 'encoder.cbhg.conv1d_banks.6.bn.bias',
> 'encoder.cbhg.conv1d_banks.6.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.6.bn.running_var',
> 'encoder.cbhg.conv1d_banks.7.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.7.bn.weight',
> 'encoder.cbhg.conv1d_banks.7.bn.bias',
> 'encoder.cbhg.conv1d_banks.7.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.7.bn.running_var',
> 'encoder.cbhg.conv1d_banks.8.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.8.bn.weight',
> 'encoder.cbhg.conv1d_banks.8.bn.bias',
> 'encoder.cbhg.conv1d_banks.8.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.8.bn.running_var',
> 'encoder.cbhg.conv1d_banks.9.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.9.bn.weight',
> 'encoder.cbhg.conv1d_banks.9.bn.bias',
> 'encoder.cbhg.conv1d_banks.9.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.9.bn.running_var',
> 'encoder.cbhg.conv1d_banks.10.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.10.bn.weight',
> 'encoder.cbhg.conv1d_banks.10.bn.bias',
> 'encoder.cbhg.conv1d_banks.10.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.10.bn.running_var',
> 'encoder.cbhg.conv1d_banks.11.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.11.bn.weight',
> 'encoder.cbhg.conv1d_banks.11.bn.bias',
> 'encoder.cbhg.conv1d_banks.11.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.11.bn.running_var',
> 'encoder.cbhg.conv1d_banks.12.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.12.bn.weight',
> 'encoder.cbhg.conv1d_banks.12.bn.bias',
> 'encoder.cbhg.conv1d_banks.12.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.12.bn.running_var',
> 'encoder.cbhg.conv1d_banks.13.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.13.bn.weight',
> 'encoder.cbhg.conv1d_banks.13.bn.bias',
> 'encoder.cbhg.conv1d_banks.13.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.13.bn.running_var',
> 'encoder.cbhg.conv1d_banks.14.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.14.bn.weight',
> 'encoder.cbhg.conv1d_banks.14.bn.bias',
> 'encoder.cbhg.conv1d_banks.14.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.14.bn.running_var',
> 'encoder.cbhg.conv1d_banks.15.conv1d.weight',
> 'encoder.cbhg.conv1d_banks.15.bn.weight',
> 'encoder.cbhg.conv1d_banks.15.bn.bias',
> 'encoder.cbhg.conv1d_banks.15.bn.running_mean',
> 'encoder.cbhg.conv1d_banks.15.bn.running_var',
> 'encoder.cbhg.conv1d_projections.0.conv1d.weight',
> 'encoder.cbhg.conv1d_projections.0.bn.weight',
> 'encoder.cbhg.conv1d_projections.0.bn.bias',
> 'encoder.cbhg.conv1d_projections.0.bn.running_mean',
> 'encoder.cbhg.conv1d_projections.0.bn.running_var',
> 'encoder.cbhg.conv1d_projections.1.conv1d.weight',
> 'encoder.cbhg.conv1d_projections.1.bn.weight',
> 'encoder.cbhg.conv1d_projections.1.bn.bias',
> 'encoder.cbhg.conv1d_projections.1.bn.running_mean',
> 'encoder.cbhg.conv1d_projections.1.bn.running_var',
> 'encoder.cbhg.pre_highway.weight', 'encoder.cbhg.highways.0.H.weight',
> 'encoder.cbhg.highways.0.H.bias', 'encoder.cbhg.highways.0.T.weight',
> 'encoder.cbhg.highways.0.T.bias', 'encoder.cbhg.highways.1.H.weight',
> 'encoder.cbhg.highways.1.H.bias', 'encoder.cbhg.highways.1.T.weight',
> 'encoder.cbhg.highways.1.T.bias', 'encoder.cbhg.highways.2.H.weight',
> 'encoder.cbhg.highways.2.H.bias', 'encoder.cbhg.highways.2.T.weight',
> 'encoder.cbhg.highways.2.T.bias', 'encoder.cbhg.highways.3.H.weight',
> 'encoder.cbhg.highways.3.H.bias', 'encoder.cbhg.highways.3.T.weight',
> 'encoder.cbhg.highways.3.T.bias', 'encoder.cbhg.gru.weight_ih_l0',
> 'encoder.cbhg.gru.weight_hh_l0', 'encoder.cbhg.gru.bias_ih_l0',
> 'encoder.cbhg.gru.bias_hh_l0',
> 'encoder.cbhg.gru.weight_ih_l0_reverse',
> 'encoder.cbhg.gru.weight_hh_l0_reverse',
> 'encoder.cbhg.gru.bias_ih_l0_reverse',
> 'encoder.cbhg.gru.bias_hh_l0_reverse',
> 'decoder.project_to_decoder_in.weight',
> 'decoder.project_to_decoder_in.bias',
> 'decoder.decoder_rnns.0.weight_ih',
> 'decoder.decoder_rnns.0.weight_hh', 'decoder.decoder_rnns.0.bias_ih',
> 'decoder.decoder_rnns.0.bias_hh', 'decoder.decoder_rnns.1.weight_ih',
> 'decoder.decoder_rnns.1.weight_hh', 'decoder.decoder_rnns.1.bias_ih',
> 'decoder.decoder_rnns.1.bias_hh', 'decoder.proj_to_mel.weight',
> 'decoder.proj_to_mel.bias', 'decoder.prenet.layers.0.weight',
> 'decoder.prenet.layers.0.bias', 'decoder.prenet.layers.1.weight',
> 'decoder.prenet.layers.1.bias',
> 'decoder.attention_rnn.rnn_cell.weight_ih',
> 'decoder.attention_rnn.rnn_cell.weight_hh',
> 'decoder.attention_rnn.rnn_cell.bias_ih',
> 'decoder.attention_rnn.rnn_cell.bias_hh',
> 'decoder.attention_rnn.alignment_model.query_layer.weight',
> 'decoder.attention_rnn.alignment_model.query_layer.bias',
> 'decoder.attention_rnn.alignment_model.annot_layer.weight',
> 'decoder.attention_rnn.alignment_model.annot_layer.bias',
> 'decoder.attention_rnn.alignment_model.v.weight',
> 'decoder.stopnet.rnn.weight_ih', 'decoder.stopnet.rnn.weight_hh',
> 'decoder.stopnet.rnn.bias_ih', 'decoder.stopnet.rnn.bias_hh',
> 'decoder.stopnet.linear.weight', 'decoder.stopnet.linear.bias',
> 'postnet.conv1d_banks.0.conv1d.weight',
> 'postnet.conv1d_banks.0.bn.weight', 'postnet.conv1d_banks.0.bn.bias',
> 'postnet.conv1d_banks.0.bn.running_mean',
> 'postnet.conv1d_banks.0.bn.running_var',
> 'postnet.conv1d_banks.1.conv1d.weight',
> 'postnet.conv1d_banks.1.bn.weight', 'postnet.conv1d_banks.1.bn.bias',
> 'postnet.conv1d_banks.1.bn.running_mean',
> 'postnet.conv1d_banks.1.bn.running_var',
> 'postnet.conv1d_banks.2.conv1d.weight',
> 'postnet.conv1d_banks.2.bn.weight', 'postnet.conv1d_banks.2.bn.bias',
> 'postnet.conv1d_banks.2.bn.running_mean',
> 'postnet.conv1d_banks.2.bn.running_var',
> 'postnet.conv1d_banks.3.conv1d.weight',
> 'postnet.conv1d_banks.3.bn.weight', 'postnet.conv1d_banks.3.bn.bias',
> 'postnet.conv1d_banks.3.bn.running_mean',
> 'postnet.conv1d_banks.3.bn.running_var',
> 'postnet.conv1d_banks.4.conv1d.weight',
> 'postnet.conv1d_banks.4.bn.weight', 'postnet.conv1d_banks.4.bn.bias',
> 'postnet.conv1d_banks.4.bn.running_mean',
> 'postnet.conv1d_banks.4.bn.running_var',
> 'postnet.conv1d_banks.5.conv1d.weight',
> 'postnet.conv1d_banks.5.bn.weight', 'postnet.conv1d_banks.5.bn.bias',
> 'postnet.conv1d_banks.5.bn.running_mean',
> 'postnet.conv1d_banks.5.bn.running_var',
> 'postnet.conv1d_banks.6.conv1d.weight',
> 'postnet.conv1d_banks.6.bn.weight', 'postnet.conv1d_banks.6.bn.bias',
> 'postnet.conv1d_banks.6.bn.running_mean',
> 'postnet.conv1d_banks.6.bn.running_var',
> 'postnet.conv1d_banks.7.conv1d.weight',
> 'postnet.conv1d_banks.7.bn.weight', 'postnet.conv1d_banks.7.bn.bias',
> 'postnet.conv1d_banks.7.bn.running_mean',
> 'postnet.conv1d_banks.7.bn.running_var',
> 'postnet.conv1d_projections.0.conv1d.weight',
> 'postnet.conv1d_projections.0.bn.weight',
> 'postnet.conv1d_projections.0.bn.bias',
> 'postnet.conv1d_projections.0.bn.running_mean',
> 'postnet.conv1d_projections.0.bn.running_var',
> 'postnet.conv1d_projections.1.conv1d.weight',
> 'postnet.conv1d_projections.1.bn.weight',
> 'postnet.conv1d_projections.1.bn.bias',
> 'postnet.conv1d_projections.1.bn.running_mean',
> 'postnet.conv1d_projections.1.bn.running_var',
> 'postnet.pre_highway.weight', 'postnet.highways.0.H.weight',
> 'postnet.highways.0.H.bias', 'postnet.highways.0.T.weight',
> 'postnet.highways.0.T.bias', 'postnet.highways.1.H.weight',
> 'postnet.highways.1.H.bias', 'postnet.highways.1.T.weight',
> 'postnet.highways.1.T.bias', 'postnet.highways.2.H.weight',
> 'postnet.highways.2.H.bias', 'postnet.highways.2.T.weight',
> 'postnet.highways.2.T.bias', 'postnet.highways.3.H.weight',
> 'postnet.highways.3.H.bias', 'postnet.highways.3.T.weight',
> 'postnet.highways.3.T.bias', 'postnet.gru.weight_ih_l0',
> 'postnet.gru.weight_hh_l0', 'postnet.gru.bias_ih_l0',
> 'postnet.gru.bias_hh_l0', 'postnet.gru.weight_ih_l0_reverse',
> 'postnet.gru.weight_hh_l0_reverse', 'postnet.gru.bias_ih_l0_reverse',
> 'postnet.gru.bias_hh_l0_reverse'.
> size mismatch for embedding.weight: copying a param with shape
> torch.Size(\[149, 256\]) from checkpoint, the shape in current model
> is torch.Size(\[130, 512\]).
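The traceback above is `load_state_dict` running in strict mode: every key the Tacotron2 model expects but the checkpoint lacks is reported as missing (`encoder.lstm.*`, `postnet.convolutions.*`), and every key the checkpoint supplies that the model doesn't have is reported as unexpected (`encoder.cbhg.*`, `postnet.conv1d_banks.*`). The CBHG-style keys suggest the checkpoint was trained with a different model class (Tacotron) than the one the config builds (Tacotron2). A minimal stdlib sketch of that key comparison, using shortened example key names rather than the full sets:

```python
def compare_state_dicts(model_keys, checkpoint_keys):
    """Mimic the strict key check in torch.nn.Module.load_state_dict:
    report keys the model expects but the checkpoint lacks (missing),
    and keys the checkpoint has but the model does not (unexpected)."""
    missing = sorted(set(model_keys) - set(checkpoint_keys))
    unexpected = sorted(set(checkpoint_keys) - set(model_keys))
    return missing, unexpected

# Shortened, illustrative key sets: Tacotron2 has an LSTM encoder,
# while the checkpoint carries Tacotron's CBHG-based encoder keys.
model_keys = ["embedding.weight", "encoder.lstm.weight_ih_l0"]
checkpoint_keys = ["embedding.weight", "encoder.cbhg.gru.weight_ih_l0"]

missing, unexpected = compare_state_dicts(model_keys, checkpoint_keys)
print(missing)     # ['encoder.lstm.weight_ih_l0']
print(unexpected)  # ['encoder.cbhg.gru.weight_ih_l0']
```

When both lists are non-empty and the unexpected keys belong to a recognizably different architecture, the practical fix is usually to match the config's model type to the checkpoint (or download the checkpoint/config pair together), not to patch individual keys.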
[This is an archived TTS discussion thread from discourse.mozilla.org/t/problem-with-commit-from-github-and-config-model-help-find-worker-model-and-comit-for-test-tts]