3-3 High-level API: Demonstration

The examples below use high-level APIs in TensorFlow to implement a linear regression model and a DNN binary classification model.

The high-level APIs here are mainly the modeling class interfaces in tf.keras.models.

Keras supports three ways of building models: sequential modeling with the Sequential class, arbitrary-architecture modeling with the functional API, and customized modeling by subclassing the base class Model.

Here we demonstrate the first and third styles: modeling with Sequential, and customized modeling by subclassing the base class Model.
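For completeness, here is a minimal sketch of the remaining style, the functional API. The layer sizes below are illustrative only and are not tied to the examples in this section:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Functional API: wire layers together as functions from inputs to outputs
inputs = tf.keras.Input(shape=(2,))
hidden = layers.Dense(4, activation="relu")(inputs)
outputs = layers.Dense(1, activation="sigmoid")(hidden)
model = models.Model(inputs=inputs, outputs=outputs)
model.summary()
```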

```python
import tensorflow as tf

# Time stamp utility: prints a separator line followed by the current time
@tf.function
def printbar():
    today_ts = tf.timestamp() % (24*60*60)
    hour = tf.cast(today_ts//3600 + 8, tf.int32) % tf.constant(24)  # +8 shifts UTC to UTC+8
    minute = tf.cast((today_ts % 3600)//60, tf.int32)
    second = tf.cast(tf.floor(today_ts % 60), tf.int32)

    def timeformat(m):
        # Zero-pad single-digit values
        if tf.strings.length(tf.strings.format("{}", m)) == 1:
            return tf.strings.format("0{}", m)
        else:
            return tf.strings.format("{}", m)

    timestring = tf.strings.join(
        [timeformat(hour), timeformat(minute), timeformat(second)],
        separator=":")
    tf.print("=========="*8 + timestring)
```
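Calling printbar() prints a row of 80 equals signs followed by the current time in UTC+8; these are the separator lines that appear in the training logs later in this section.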

1. Linear Regression Model

In this example, we use the Sequential class to build the model layer by layer and train it with the built-in model.fit method (suitable for beginners).

(a) Data Preparation

```python
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow.keras import models, layers, losses, metrics, optimizers

# Number of samples
n = 400

# Generating the dataset
X = tf.random.uniform([n, 2], minval=-10, maxval=10)
w0 = tf.constant([[2.0], [-3.0]])
b0 = tf.constant([[3.0]])
Y = X@w0 + b0 + tf.random.normal([n, 1], mean=0.0, stddev=2.0)  # @ is matrix multiplication; adding Gaussian noise
```
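The ground truth is therefore y = 2*x1 - 3*x2 + 3 plus Gaussian noise with standard deviation 2, so the weights and bias recovered by training should come out close to w0 = [2, -3] and b0 = 3 (compare the fitted values printed after training below).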
```python
# Data visualization
%matplotlib inline
%config InlineBackend.figure_format = 'svg'

plt.figure(figsize=(12, 5))
ax1 = plt.subplot(121)
ax1.scatter(X[:, 0], Y[:, 0], c="b")
plt.xlabel("x1")
plt.ylabel("y", rotation=0)

ax2 = plt.subplot(122)
ax2.scatter(X[:, 1], Y[:, 0], c="g")
plt.xlabel("x2")
plt.ylabel("y", rotation=0)
plt.show()
```

(Figure 1: scatter plots of y against x1 and x2)

(b) Model Definition

```python
tf.keras.backend.clear_session()

model = models.Sequential()
model.add(layers.Dense(1, input_shape=(2,)))
model.summary()
```
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 1)                 3
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
```

(c) Model Training

```python
### Training with the built-in fit method
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, Y, batch_size=10, epochs=200)

tf.print("w = ", model.layers[0].kernel)
tf.print("b = ", model.layers[0].bias)
```
```
Epoch 197/200
400/400 [==============================] - 0s 190us/sample - loss: 4.3977 - mae: 1.7129
Epoch 198/200
400/400 [==============================] - 0s 172us/sample - loss: 4.3918 - mae: 1.7117
Epoch 199/200
400/400 [==============================] - 0s 134us/sample - loss: 4.3861 - mae: 1.7106
Epoch 200/200
400/400 [==============================] - 0s 166us/sample - loss: 4.3786 - mae: 1.7092
w =  [[1.99339032]
 [-3.00866461]]
b =  [2.67018795]
```
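Because the model was compiled, the same high-level API also provides evaluation and prediction out of the box. A small usage sketch (not part of the original listing; the input points are made up for illustration):

```python
# Evaluate the compiled loss (mse) and metric (mae) on the training data
mse, mae = model.evaluate(X, Y, verbose=0)
tf.print("mse =", mse, " mae =", mae)

# Predict on a couple of hypothetical new points
Y_new = model.predict(tf.constant([[1.0, 2.0], [-3.0, 4.0]]))
print(Y_new)
```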
```python
# Visualizing the results
%matplotlib inline
%config InlineBackend.figure_format = 'svg'

w, b = model.variables

plt.figure(figsize=(12, 5))
ax1 = plt.subplot(121)
ax1.scatter(X[:, 0], Y[:, 0], c="b", label="samples")
ax1.plot(X[:, 0], w[0]*X[:, 0] + b[0], "-r", linewidth=5.0, label="model")
ax1.legend()
plt.xlabel("x1")
plt.ylabel("y", rotation=0)

ax2 = plt.subplot(122)
ax2.scatter(X[:, 1], Y[:, 0], c="g", label="samples")
ax2.plot(X[:, 1], w[1]*X[:, 1] + b[0], "-r", linewidth=5.0, label="model")
ax2.legend()
plt.xlabel("x2")
plt.ylabel("y", rotation=0)
plt.show()
```

(Figure 2: fitted regression line plotted against the samples in x1 and x2)

2. DNN Binary Classification Model

This example builds a customized model by subclassing the base class Model and trains it with a customized training loop (suitable for experts).

(a) Data Preparation

```python
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow.keras import models, layers, losses, metrics, optimizers  # models is needed for subclassing below
%matplotlib inline
%config InlineBackend.figure_format = 'svg'

# Number of positive/negative samples
n_positive, n_negative = 2000, 2000

# Generating the positive samples, distributed on a smaller ring
r_p = 5.0 + tf.random.truncated_normal([n_positive, 1], 0.0, 1.0)
theta_p = tf.random.uniform([n_positive, 1], 0.0, 2*np.pi)
Xp = tf.concat([r_p*tf.cos(theta_p), r_p*tf.sin(theta_p)], axis=1)
Yp = tf.ones_like(r_p)

# Generating the negative samples, distributed on a larger ring
r_n = 8.0 + tf.random.truncated_normal([n_negative, 1], 0.0, 1.0)
theta_n = tf.random.uniform([n_negative, 1], 0.0, 2*np.pi)
Xn = tf.concat([r_n*tf.cos(theta_n), r_n*tf.sin(theta_n)], axis=1)
Yn = tf.zeros_like(r_n)

# Assembling all samples
X = tf.concat([Xp, Xn], axis=0)
Y = tf.concat([Yp, Yn], axis=0)

# Shuffling the samples
data = tf.concat([X, Y], axis=1)
data = tf.random.shuffle(data)
X = data[:, :2]
Y = data[:, 2:]

# Visualizing the data
plt.figure(figsize=(6, 6))
plt.scatter(Xp[:, 0].numpy(), Xp[:, 1].numpy(), c="r")
plt.scatter(Xn[:, 0].numpy(), Xn[:, 1].numpy(), c="g")
plt.legend(["positive", "negative"]);
```

(Figure 3: positive samples (red, inner ring) and negative samples (green, outer ring))

```python
# Splitting into training and validation sets (75% / 25%)
n = n_positive + n_negative  # total number of samples; needed for the split below

ds_train = tf.data.Dataset.from_tensor_slices((X[0:n*3//4, :], Y[0:n*3//4, :])) \
    .shuffle(buffer_size=1000).batch(20) \
    .prefetch(tf.data.experimental.AUTOTUNE) \
    .cache()

ds_valid = tf.data.Dataset.from_tensor_slices((X[n*3//4:, :], Y[n*3//4:, :])) \
    .batch(20) \
    .prefetch(tf.data.experimental.AUTOTUNE) \
    .cache()
```
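Note that .cache() is placed at the end of the pipeline here, so the shuffled batches produced in the first epoch are cached and replayed in later epochs; if you want a fresh shuffle every epoch, move .cache() before .shuffle().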

(b) Model Definition

```python
tf.keras.backend.clear_session()

class DNNModel(models.Model):
    def __init__(self):
        super(DNNModel, self).__init__()

    def build(self, input_shape):
        self.dense1 = layers.Dense(4, activation="relu", name="dense1")
        self.dense2 = layers.Dense(8, activation="relu", name="dense2")
        self.dense3 = layers.Dense(1, activation="sigmoid", name="dense3")
        super(DNNModel, self).build(input_shape)

    # Forward propagation
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 2], dtype=tf.float32)])
    def call(self, x):
        x = self.dense1(x)
        x = self.dense2(x)
        y = self.dense3(x)
        return y

model = DNNModel()
model.build(input_shape=(None, 2))
model.summary()
```
```
Model: "dnn_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense1 (Dense)               multiple                  12
_________________________________________________________________
dense2 (Dense)               multiple                  40
_________________________________________________________________
dense3 (Dense)               multiple                  9
=================================================================
Total params: 61
Trainable params: 61
Non-trainable params: 0
_________________________________________________________________
```
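The Output Shape column reads "multiple" because Keras cannot statically infer per-layer output shapes from the call method of a subclassed model the way it can for Sequential or functional models.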

(c) Model Training

```python
### Customizing the training loop
optimizer = optimizers.Adam(learning_rate=0.01)
loss_func = tf.keras.losses.BinaryCrossentropy()

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_metric = tf.keras.metrics.BinaryAccuracy(name='train_accuracy')
valid_loss = tf.keras.metrics.Mean(name='valid_loss')
valid_metric = tf.keras.metrics.BinaryAccuracy(name='valid_accuracy')

@tf.function
def train_step(model, features, labels):
    # Forward pass under the gradient tape, then one optimizer update
    with tf.GradientTape() as tape:
        predictions = model(features)
        loss = loss_func(labels, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

    train_loss.update_state(loss)
    train_metric.update_state(labels, predictions)

@tf.function
def valid_step(model, features, labels):
    predictions = model(features)
    batch_loss = loss_func(labels, predictions)
    valid_loss.update_state(batch_loss)
    valid_metric.update_state(labels, predictions)

def train_model(model, ds_train, ds_valid, epochs):
    for epoch in tf.range(1, epochs + 1):
        for features, labels in ds_train:
            train_step(model, features, labels)
        for features, labels in ds_valid:
            valid_step(model, features, labels)

        logs = 'Epoch={},Loss:{},Accuracy:{},Valid Loss:{},Valid Accuracy:{}'
        if epoch % 100 == 0:
            printbar()
            tf.print(tf.strings.format(logs,
                (epoch, train_loss.result(), train_metric.result(),
                 valid_loss.result(), valid_metric.result())))

        train_loss.reset_states()
        valid_loss.reset_states()
        train_metric.reset_states()
        valid_metric.reset_states()

train_model(model, ds_train, ds_valid, 1000)
```
```
================================================================================17:35:02
Epoch=100,Loss:0.194088802,Accuracy:0.923064,Valid Loss:0.215538561,Valid Accuracy:0.904368
================================================================================17:35:22
Epoch=200,Loss:0.151239693,Accuracy:0.93768847,Valid Loss:0.181166962,Valid Accuracy:0.920664132
================================================================================17:35:43
Epoch=300,Loss:0.134556711,Accuracy:0.944247484,Valid Loss:0.171530813,Valid Accuracy:0.926396072
================================================================================17:36:04
Epoch=400,Loss:0.125722557,Accuracy:0.949172914,Valid Loss:0.16731061,Valid Accuracy:0.929318547
================================================================================17:36:24
Epoch=500,Loss:0.120216407,Accuracy:0.952525079,Valid Loss:0.164817035,Valid Accuracy:0.931044817
================================================================================17:36:44
Epoch=600,Loss:0.116434008,Accuracy:0.954830289,Valid Loss:0.163089141,Valid Accuracy:0.932202339
================================================================================17:37:05
Epoch=700,Loss:0.113658346,Accuracy:0.956433,Valid Loss:0.161804497,Valid Accuracy:0.933092058
================================================================================17:37:25
Epoch=800,Loss:0.111522928,Accuracy:0.957467675,Valid Loss:0.160796657,Valid Accuracy:0.93379426
================================================================================17:37:46
Epoch=900,Loss:0.109816991,Accuracy:0.958205402,Valid Loss:0.159987748,Valid Accuracy:0.934343576
================================================================================17:38:06
Epoch=1000,Loss:0.10841465,Accuracy:0.958805501,Valid Loss:0.159325734,Valid Accuracy:0.934785843
```
```python
# Visualizing the results
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12, 5))

ax1.scatter(Xp[:, 0].numpy(), Xp[:, 1].numpy(), c="r")
ax1.scatter(Xn[:, 0].numpy(), Xn[:, 1].numpy(), c="g")
ax1.legend(["positive", "negative"]);
ax1.set_title("y_true");

# Split the samples by the model's predictions, thresholded at 0.5
Xp_pred = tf.boolean_mask(X, tf.squeeze(model(X) >= 0.5), axis=0)
Xn_pred = tf.boolean_mask(X, tf.squeeze(model(X) < 0.5), axis=0)

ax2.scatter(Xp_pred[:, 0].numpy(), Xp_pred[:, 1].numpy(), c="r")
ax2.scatter(Xn_pred[:, 0].numpy(), Xn_pred[:, 1].numpy(), c="g")
ax2.legend(["positive", "negative"]);
ax2.set_title("y_pred");
```

(Figure 4: ground-truth labels (y_true) vs. model predictions (y_pred))

Please leave comments in the WeChat official account “Python与算法之美” (Elegance of Python and Algorithms) if you want to communicate with the author about this content. The author will try to reply given the limited time available.

You are also welcome to join the group chat with the other readers by replying 加群 (join group) in the WeChat official account.
