minor spelling tweaks for lite docs (tensorflow#16275)
brettkoonce authored and rmlarsen committed Jan 22, 2018
1 parent f20f28b commit 90c6308
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions tensorflow/contrib/lite/models/testdata/g3doc/README.md
@@ -53,7 +53,7 @@ with the corresponding parameters as shown in the figure.
 ### Automatic Speech Recognizer (ASR) Acoustic Model (AM)
 
 The acoustic model for automatic speech recognition is the neural network model
-for matching phonemes to the input autio features. It generates posterior
+for matching phonemes to the input audio features. It generates posterior
 probabilities of phonemes from speech frontend features (log-mel filterbanks).
 It has an input size of 320 (float), an output size of 42 (float), five LSTM
 layers and one fully connected layers with a Softmax activation function, with
@@ -68,7 +68,7 @@ for predicting the probability of a word given previous words in a sentence.
 It generates posterior probabilities of the next word based from a sequence of
 words. The words are encoded as indices in a fixed size dictionary.
 The model has two inputs both of size one (integer): the current word index and
-next word index, an output size of one (float): the log probability. It consits
+next word index, an output size of one (float): the log probability. It consists
 of three embedding layer, three LSTM layers, followed by a multiplication, a
 fully connected layers and an addition.
 The corresponding parameters as shown in the figure.
2 changes: 1 addition & 1 deletion tensorflow/contrib/lite/toco/g3doc/cmdline_reference.md
@@ -229,7 +229,7 @@ additional information about the multiple input arrays:
 well-formed quantized representation of these graphs. Such graphs should be
 fixed, but as a temporary work-around, setting this
 reorder_across_fake_quant flag allows the converter to perform necessary
-graph transformaitons on them, at the cost of no longer faithfully matching
+graph transformations on them, at the cost of no longer faithfully matching
 inference and training arithmetic.
 
 ### Logging flags
