diff --git a/tensorflow/contrib/lite/models/testdata/g3doc/README.md b/tensorflow/contrib/lite/models/testdata/g3doc/README.md
index 667a5883832914..1c47e00aae2a0e 100644
--- a/tensorflow/contrib/lite/models/testdata/g3doc/README.md
+++ b/tensorflow/contrib/lite/models/testdata/g3doc/README.md
@@ -53,7 +53,7 @@ with the corresponding parameters as shown in the figure.
 ### Automatic Speech Recognizer (ASR) Acoustic Model (AM)
 
 The acoustic model for automatic speech recognition is the neural network model
-for matching phonemes to the input autio features. It generates posterior
+for matching phonemes to the input audio features. It generates posterior
 probabilities of phonemes from speech frontend features (log-mel filterbanks).
 It has an input size of 320 (float), an output size of 42 (float), five LSTM
 layers and one fully connected layers with a Softmax activation function, with
@@ -68,7 +68,7 @@ for predicting the probability of a word given previous words in a sentence. It
 generates posterior probabilities of the next word based from a sequence of
 words. The words are encoded as indices in a fixed size dictionary. The model
 has two inputs both of size one (integer): the current word index and
-next word index, an output size of one (float): the log probability. It consits
+next word index, an output size of one (float): the log probability. It consists
 of three embedding layer, three LSTM layers, followed by a multiplication, a
 fully connected layers and an addition. The corresponding parameters as shown
 in the figure.
diff --git a/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md b/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
index 4776741ab9273c..5e077952235fa1 100644
--- a/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
+++ b/tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
@@ -229,7 +229,7 @@ additional information about the multiple input arrays:
     well-formed quantized representation of these graphs. Such graphs should
     be fixed, but as a temporary work-around, setting this
     reorder_across_fake_quant flag allows the converter to perform necessary
-    graph transformaitons on them, at the cost of no longer faithfully matching
+    graph transformations on them, at the cost of no longer faithfully matching
     inference and training arithmetic.
 
 ### Logging flags
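
Illustrative note (not part of the diff above): the corrected README text describes the ASR acoustic model as taking a 320-float feature vector and producing 42 phoneme posteriors. Below is a minimal sketch of driving a model with that I/O contract through the TFLite Python interpreter; the `.tflite` filename is an assumption about the testdata directory the README documents, and the interpreter is referenced via `tf.lite.Interpreter` (older contrib-era builds exposed it as `tf.contrib.lite.Interpreter`).

```python
# Hypothetical sketch -- not part of this change -- exercising a model with the
# documented ASR-AM I/O contract (320 floats in, 42 floats out). The file name
# below is an assumption; point it at the actual .tflite file you want to load.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="speech_asr_am_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One frame of frontend features (log-mel filterbanks), shaped to whatever
# input shape the interpreter reports (expected to hold 320 float values).
features = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()

# Posterior probabilities over the 42 phoneme classes for this frame.
posteriors = interpreter.get_tensor(output_details[0]["index"])
print(posteriors.shape)
```

The same pattern would apply to the ASR language model described in the second hunk, except that its two inputs are single integer word indices and its output is a single float log probability.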