The MNIST LSTM example does not compile a correct TFLite model with the UnidirectionalSequenceLSTM operator for TFLite Micro. Instead it generates a large graph with many operators.
Versions:
Python 3.12.9 (also tested with 3.11.1)
TensorFlow 2.18.0 (also tested with 2.16.1)
Numpy 2.2.2
Absl 2.1.0
(Windows 10 22H2 and Ubuntu 20.04 LTS through Google Colab)
The problem arises when running the train.py script with newer versions of TensorFlow. Whereas older versions (Python 3.10.x with TensorFlow 2.15.x) generate the correct TFLite Micro operators, the newer versions output a whole graph of operators (see images below). This requires resolving many more operators than before (additional ops: Concatenation, Gather, Less, LogicalAnd, Logistic, Mul, Slice, Split, Tanh, and While), which also makes the final compiled binary larger.
Original Graph
New Graph
According to the new LiteRT RNN conversion page, the TFLite converter should "provide native support for standard TensorFlow RNN APIs like Keras LSTM". This sadly does not seem to work at the moment.
Is there TensorFlow syntax missing from the example code, or could the example be updated to document the required library and Python versions? Alternatively, could the example code be changed to support the newer versions of TensorFlow and Python? Thanks in advance for any help!
j-siderius changed the title from "MNIST_LSTM example case does not generate proper" to "MNIST_LSTM example case does not generate proper operators" on Feb 7, 2025
After trying many combinations of Python and library versions, the problem appears to start with TensorFlow 2.16.0.
The combinations of Python 3.10.x or 3.11.x with TensorFlow 2.15.0 or 2.15.1 do produce the expected model structure.
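Given that, pinning the toolchain to the last known-good combination seems to be the practical workaround for now. The second option below is an untested guess: TF 2.16 switched the default from Keras 2 to Keras 3, which is a plausible cause, so falling back to legacy Keras may restore the fused conversion.

```shell
# Known-good combination (with Python 3.10.x or 3.11.x)
pip install "tensorflow==2.15.1"

# Untested alternative on TF >= 2.16: fall back to Keras 2,
# in case the Keras 3 switch is what breaks the fused LSTM conversion
pip install tf-keras
export TF_USE_LEGACY_KERAS=1
```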