Noticing slow testing steps in --utf8 mode

I am training the model with Common Voice Arabic data (12 hours).

With the following parameters:

!python3 \
--train_files data/ar/clips/train.csv \
--dev_files data/ar/clips/dev.csv \
--test_files data/ar/clips/test.csv \
--train_batch_size 1 \
--dev_batch_size 1 \
--test_batch_size 1 \
--summary_dir logs/ \
--export_dir model_1024/ \
--checkpoint_dir checkpoint_dir \
--epochs 20 \
--utf8 \
--learning_rate 0.0001 \
--dropout_rate 0.00 \
--n_hidden 1024 \
--scorer ' '

I am noticing very slow progress through the steps while testing.
Please advise. I have also tested in normal mode (without --utf8), and testing seems fast there.

Use higher batch sizes for training if your CPU/GPU can handle it: 4 or 8.

Use a dropout of 0.25 or higher. Very important if nothing is moving.

And 12 hours is very little data, so don’t expect much.

UTF-8 increases the alphabet size to 256, and since you’re not passing a scorer, the decoder also can’t trim out-of-vocabulary beams during decoding, so the decoding process gets slower. If you build a scorer, it should counteract the slowness you’re seeing.
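To make the "trimming beams" idea concrete, here is a toy sketch (illustrative only, not the real ctc_decoder implementation; `prune_beams`, the candidate strings, and the trivial `lm` function are all invented for this example): an external language-model score lets the decoder discard unlikely beam extensions early, so it keeps only a narrow set of plausible beams instead of carrying every possible extension.

```python
# Toy sketch of scorer-based beam pruning. Not DeepSpeech code; the
# names and the "language model" here are made up for illustration.

def prune_beams(beams, lm_score, beam_width):
    """Keep only the `beam_width` beams the LM considers most likely."""
    # Stable sort: among equal scores, earlier candidates stay first.
    return sorted(beams, key=lm_score, reverse=True)[:beam_width]

# Hypothetical candidate transcripts and a trivial "LM" that just
# counts occurrences of a few known words:
candidates = ["the cat", "thx qqt", "the cab", "zzzzzzz"]
lm = lambda s: sum(s.count(w) for w in ("the", "cat", "cab"))

print(prune_beams(candidates, lm, beam_width=2))
# -> ['the cat', 'the cab']  (the two gibberish beams are dropped)
```

Without a scorer, nothing distinguishes "thx qqt" from "the cat", so the decoder has to keep extending all of them, which is where the extra time goes.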

The out-of-vocabulary trimming issue is also made worse by UTF-8: without a scorer, the decoder does not trim invalid UTF-8 byte sequences, so it spends time doing useless work.
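As a rough illustration of what "invalid UTF-8 sequences" means here (this is a sketch of the idea, not DeepSpeech's actual decoder; `is_valid_utf8_prefix` is an invented helper): in byte mode the decoder emits one byte at a time, and some byte sequences can never be completed into valid UTF-8, so beams containing them are pure wasted work that a scorer-aware decoder could prune.

```python
# Illustrative only: a check for whether a byte sequence is valid
# UTF-8, or could still become valid with more bytes appended.

def is_valid_utf8_prefix(data: bytes) -> bool:
    """True if `data` decodes as UTF-8 or is a completable prefix."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError as e:
        # "unexpected end of data" means a truncated multi-byte
        # character that more bytes could complete; any other reason
        # (e.g. an invalid start byte) can never be fixed.
        return e.reason == "unexpected end of data"

# A lone continuation byte (0x80) can never start a UTF-8 character:
print(is_valid_utf8_prefix(b"\x80"))               # False -> dead beam
# A 2-byte lead byte is a valid, completable prefix:
print(is_valid_utf8_prefix(b"\xd9"))               # True  -> keep it
# The completed Arabic letter noon is valid UTF-8:
print(is_valid_utf8_prefix("ن".encode("utf-8")))   # True
```

Without a scorer the decoder keeps extending beams like the first one, even though no suffix can ever turn them into real text.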

So even with a scorer, UTF-8 mode will be slower than normal mode?

No. There’s no universal rule; it depends on what you’re comparing it to. Alphabet mode can be very slow with very large alphabets, for example. With a scorer, UTF-8 mode should be just as fast in most cases, and much faster if you have a large alphabet.

In general, unless you have a specific issue with alphabet mode that can be fixed by using UTF-8 mode, you should stick to using an alphabet. Some examples of such issues:

  1. My alphabet is too large and makes the model big and slow.
  2. I want to easily train on several languages at once.
  3. I want to easily transfer from one language to another including the final layer.
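To give a feel for issue 1 above, here is a back-of-the-envelope sketch (the formula and all numbers are assumptions for illustration, not exact DeepSpeech internals): the final CTC layer needs one output per label plus a blank, so its size grows linearly with the alphabet, while UTF-8 byte mode caps it at 256 outputs regardless of the language.

```python
# Rough parameter count for a dense output layer feeding CTC.
# Assumption for illustration: outputs = alphabet_size + 1 (CTC blank),
# parameters = weights (n_hidden * outputs) + biases (outputs).

def final_layer_params(n_hidden: int, alphabet_size: int) -> int:
    outputs = alphabet_size + 1          # +1 for the CTC blank label
    return n_hidden * outputs + outputs  # weights + biases

n_hidden = 1024  # as in the training command above

print(final_layer_params(n_hidden, 40))    # small Latin-ish alphabet
print(final_layer_params(n_hidden, 5000))  # large alphabet, e.g. CJK
print(final_layer_params(n_hidden, 255))   # UTF-8 byte mode: 256 outputs
```

With a 5000-character alphabet the final layer alone is over a hundred times larger than in byte mode, which is exactly the "big and slow model" problem UTF-8 mode sidesteps.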

@reuben Thank you for these suggestions. :slight_smile: