4-5 AutoGraph and tf.Module

There are three ways of constructing a graph: static, dynamic, and Autograph.

TensorFlow 2.X uses dynamic graphs and Autograph.

A dynamic graph is easier to debug and more efficient to code, but slower to execute.

A static graph executes efficiently, but is more difficult to debug.

The Autograph mechanism transforms a dynamic graph into a static graph, combining the execution efficiency of the latter with the coding convenience of the former.

Code must follow certain rules to be convertible by Autograph; otherwise the conversion may fail or produce unexpected results.

The coding rules and the mechanism of Autograph were introduced in the previous sections.
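As a quick reminder, here is a minimal sketch (our own illustration) of one such rule: use TensorFlow ops such as tf.print instead of their Python counterparts, since plain Python statements run only once, at tracing time.

```python
import tensorflow as tf

@tf.function
def traced(a):
    print("traced")       # Python side effect: runs only while tracing
    tf.print("executed")  # TensorFlow op: runs on every call of the graph
    return a + 1

traced(tf.constant(1.0))  # prints "traced" and "executed"
traced(tf.constant(2.0))  # prints only "executed"
```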

In this section, we introduce how to construct Autograph using tf.Module.

1. Introduction to Autograph and tf.Module

We mentioned that defining tf.Variable should be avoided inside a function decorated with @tf.function.

However, defining tf.Variable outside the function makes for leaky encapsulation, since the function then depends on state outside itself.

A simple solution is to define a class, place the tf.Variable definitions in the initializer, and implement the computation in the other methods.
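A minimal sketch of this pattern (the class name Accumulator is our own, for illustration):

```python
import tensorflow as tf

class Accumulator:
    def __init__(self):
        # The variable is created once, in the initializer
        self.x = tf.Variable(1.0, dtype=tf.float32)

    @tf.function
    def add_print(self, a):
        # The computation references only self.x; no external dependency
        self.x.assign_add(a)
        tf.print(self.x)
        return self.x
```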

After this neat maneuver, everything feels natural again.

The pleasant surprise is that TensorFlow provides us with a base class, tf.Module, that achieves exactly this. What's more, it is designed to be inherited, so child classes can conveniently manage variables and other referenced Modules. Most importantly, it allows us to save the model through tf.saved_model and achieve cross-platform deployment. What a surprise!

In fact, tf.keras.models.Model and tf.keras.layers.Layer both inherit from tf.Module. They provide management of variables and of the referenced sub-modules.

Through the packaging provided by tf.Module together with the low-level APIs in TensorFlow, we are able to develop arbitrary learning models (not only neural networks) and implement cross-platform deployment.
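As a taste of what this looks like, here is a minimal sketch (our own illustration, not code from the original text; LinearModel and train_step are hypothetical names) of a linear regression model built directly on tf.Module and trained with the low-level APIs:

```python
import tensorflow as tf

class LinearModel(tf.Module):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.w = tf.Variable(tf.random.normal([3, 1]), name="w")
        self.b = tf.Variable(tf.zeros([1]), name="b")

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 3], dtype=tf.float32)])
    def __call__(self, x):
        return x @ self.w + self.b

model = LinearModel()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

@tf.function
def train_step(x, y):
    # One gradient-descent step on mean squared error
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Because LinearModel inherits from tf.Module, it can be saved with tf.saved_model.save just like the DemoModule example below.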

2. Packaging Autograph Using tf.Module

First, we define a simple function.

```python
import tensorflow as tf

x = tf.Variable(1.0, dtype=tf.float32)

# Use input_signature to restrict the shape and dtype of the input tensors
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
def add_print(a):
    x.assign_add(a)
    tf.print(x)
    return x
```
```python
add_print(tf.constant(3.0))
# add_print(tf.constant(3)) # Error: argument inconsistent with the tensor signature
```
```
4
```
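Since x was initialized to 1.0, adding 3.0 prints 4. The commented-out call fails because an int32 tensor does not match the float32 signature.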

Package using tf.Module.

```python
class DemoModule(tf.Module):
    def __init__(self, init_value=tf.constant(0.0), name=None):
        super(DemoModule, self).__init__(name=name)
        with self.name_scope:  # Identical to: with tf.name_scope("demo_module")
            self.x = tf.Variable(init_value, dtype=tf.float32, trainable=True)

    @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
    def addprint(self, a):
        with self.name_scope:
            self.x.assign_add(a)
            tf.print(self.x)
            return self.x
```
```python
# Execute
demo = DemoModule(init_value=tf.constant(1.0))
result = demo.addprint(tf.constant(5.0))
```
```
6
```
```python
# Browse all variables and trainable variables in the module
print(demo.variables)
print(demo.trainable_variables)
```
```
(<tf.Variable 'demo_module/Variable:0' shape=() dtype=float32, numpy=6.0>,)
(<tf.Variable 'demo_module/Variable:0' shape=() dtype=float32, numpy=6.0>,)
```
```python
# Browse all sub-modules (an empty tuple here, since DemoModule holds a variable but no sub-modules)
demo.submodules
```
```python
# Save the model using tf.saved_model, specifying the signature for cross-platform deployment
tf.saved_model.save(demo, "../data/demo/1",
                    signatures={"serving_default": demo.addprint})
```
```python
# Load the model
demo2 = tf.saved_model.load("../data/demo/1")
demo2.addprint(tf.constant(5.0))
```
```
11
```
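The exported serving signature can also be invoked directly on the loaded object. A minimal sketch (assuming TensorFlow's default output naming, where a single unnamed return value becomes "output_0"):

```python
# Call the exported serving signature instead of the method itself
serving_fn = demo2.signatures["serving_default"]
print(serving_fn(a=tf.constant(5.0)))  # a dict like {'output_0': <tf.Tensor ...>}
```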
```python
# Check the info of the model file. The highlighted info in the figure below can be used during deployment and cross-platform usage.
!saved_model_cli show --dir ../data/demo/1 --all
```

(Figure 1: output of saved_model_cli, with the signature info highlighted)

Check the computational graph in TensorBoard: the module appears under the name demo_module, revealing the hierarchy of the graph.

```python
import datetime

# Create the log
stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = '../data/demomodule/%s' % stamp
writer = tf.summary.create_file_writer(logdir)

# Start tracing the Autograph
tf.summary.trace_on(graph=True, profiler=True)

# Execute the Autograph
demo = DemoModule(init_value=tf.constant(0.0))
result = demo.addprint(tf.constant(5.0))

# Write the graph info into the log
with writer.as_default():
    tf.summary.trace_export(
        name="demomodule",
        step=0,
        profiler_outdir=logdir)
```
```python
# Magic command to launch TensorBoard in Jupyter
%reload_ext tensorboard
```
```python
from tensorboard import notebook
notebook.list()
```
```python
notebook.start("--logdir ../data/demomodule/")
```

(Figure 2: the graph of demo_module in TensorBoard)

Besides subclassing tf.Module, it is also possible to package a function by adding attributes to a plain tf.Module instance.

```python
mymodule = tf.Module()
mymodule.x = tf.Variable(0.0)

@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
def addprint(a):
    mymodule.x.assign_add(a)
    tf.print(mymodule.x)
    return mymodule.x

mymodule.addprint = addprint
```
```python
mymodule.addprint(tf.constant(1.0)).numpy()
```
```
1.0
```
```python
print(mymodule.variables)
```
```
(<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>,)
```
```python
# Save the model using tf.saved_model
tf.saved_model.save(mymodule, "../data/mymodule",
                    signatures={"serving_default": mymodule.addprint})

# Load the model
mymodule2 = tf.saved_model.load("../data/mymodule")
mymodule2.addprint(tf.constant(5.0))
```
```
INFO:tensorflow:Assets written to: ../data/mymodule/assets
5
```

3. tf.Module, tf.keras.Model and tf.keras.layers.Layer

The models and layers in tf.keras are implemented by inheriting from tf.Module; both are able to manage variables and sub-modules.

```python
import tensorflow as tf
from tensorflow.keras import models, layers, losses, metrics
```
```python
print(issubclass(tf.keras.Model, tf.Module))
print(issubclass(tf.keras.layers.Layer, tf.Module))
print(issubclass(tf.keras.Model, tf.keras.layers.Layer))
```
```
True
True
True
```
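Together these show the inheritance chain: tf.keras.Model inherits from tf.keras.layers.Layer, which in turn inherits from tf.Module.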
```python
tf.keras.backend.clear_session()

model = models.Sequential()
model.add(layers.Dense(4, input_shape=(10,)))
model.add(layers.Dense(2))
model.add(layers.Dense(1))
model.summary()
```
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 4)                 44
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 10
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 3
=================================================================
Total params: 57
Trainable params: 57
Non-trainable params: 0
_________________________________________________________________
```
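The parameter counts follow from kernel plus bias in each Dense layer: 10×4+4 = 44, 4×2+2 = 10, and 2×1+1 = 3, for 57 in total.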
```python
model.variables
```
```
[<tf.Variable 'dense/kernel:0' shape=(10, 4) dtype=float32, numpy=
 array([[-0.06741005,  0.45534766,  0.5190817 , -0.01806331],
        [-0.14258742, -0.49711505,  0.26030976,  0.18607801],
        [-0.62806034,  0.5327399 ,  0.42206633,  0.29201728],
        [-0.16602087, -0.18901917,  0.55159235, -0.01091868],
        [ 0.04533798,  0.326845  , -0.582667  ,  0.19431782],
        [ 0.6494713 , -0.16174704,  0.4062966 ,  0.48760796],
        [ 0.58400524, -0.6280886 , -0.11265379, -0.6438277 ],
        [ 0.26642334,  0.49275804,  0.20793378, -0.43889117],
        [ 0.4092741 ,  0.09871006, -0.2073121 ,  0.26047975],
        [ 0.43910992,  0.00199282, -0.07711256, -0.27966842]],
       dtype=float32)>,
 <tf.Variable 'dense/bias:0' shape=(4,) dtype=float32, numpy=array([0., 0., 0., 0.], dtype=float32)>,
 <tf.Variable 'dense_1/kernel:0' shape=(4, 2) dtype=float32, numpy=
 array([[ 0.5022683 , -0.0507431 ],
        [-0.61540484,  0.9369011 ],
        [-0.14412141, -0.54607415],
        [ 0.2027781 , -0.4651153 ]], dtype=float32)>,
 <tf.Variable 'dense_1/bias:0' shape=(2,) dtype=float32, numpy=array([0., 0.], dtype=float32)>,
 <tf.Variable 'dense_2/kernel:0' shape=(2, 1) dtype=float32, numpy=
 array([[-0.244825 ],
        [-1.2101456]], dtype=float32)>,
 <tf.Variable 'dense_2/bias:0' shape=(1,) dtype=float32, numpy=array([0.], dtype=float32)>]
```
```python
model.layers[0].trainable = False  # Freeze the variables in layer 0, making them untrainable
model.trainable_variables
```
```
[<tf.Variable 'dense_1/kernel:0' shape=(4, 2) dtype=float32, numpy=
 array([[ 0.5022683 , -0.0507431 ],
        [-0.61540484,  0.9369011 ],
        [-0.14412141, -0.54607415],
        [ 0.2027781 , -0.4651153 ]], dtype=float32)>,
 <tf.Variable 'dense_1/bias:0' shape=(2,) dtype=float32, numpy=array([0., 0.], dtype=float32)>,
 <tf.Variable 'dense_2/kernel:0' shape=(2, 1) dtype=float32, numpy=
 array([[-0.244825 ],
        [-1.2101456]], dtype=float32)>,
 <tf.Variable 'dense_2/bias:0' shape=(1,) dtype=float32, numpy=array([0.], dtype=float32)>]
```
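With layer 0 frozen, only the 10 + 3 = 13 parameters of the last two layers remain trainable.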
```python
model.submodules
```
```
(<tensorflow.python.keras.engine.input_layer.InputLayer at 0x144d8c080>,
 <tensorflow.python.keras.layers.core.Dense at 0x144daada0>,
 <tensorflow.python.keras.layers.core.Dense at 0x144d8c5c0>,
 <tensorflow.python.keras.layers.core.Dense at 0x144d7aa20>)
```
```python
model.layers
```
```
[<tensorflow.python.keras.layers.core.Dense at 0x144daada0>,
 <tensorflow.python.keras.layers.core.Dense at 0x144d8c5c0>,
 <tensorflow.python.keras.layers.core.Dense at 0x144d7aa20>]
```
```python
print(model.name)
print(model.name_scope())
```
```
sequential
sequential
```

Please leave comments in the WeChat official account "Python与算法之美" (Elegance of Python and Algorithms) if you want to communicate with the author about the content. The author will try his best to reply, given the limited time available.

You are also welcome to join the group chat with the other readers by replying 加群 (join group) in the WeChat official account.
