4-2 Mathematical Operations of the Tensor

Tensor operations include structural operations and mathematical operations.

The structural operations include tensor creation, index slicing, dimension transformation, combining & splitting, etc.

The mathematical operations include scalar operations, vector operations, and matrix operations. We will also introduce the broadcasting mechanism of tensor operations.

This section is about the mathematical operations of the tensor.

1. Scalar Operation


The scalar operations include add, subtract, multiply, divide, power, trigonometric functions, exponential functions, log functions, logical comparisons, etc.

Scalar operations are element-by-element operations.

Some of the scalar operators are overloaded versions of the usual mathematical operators and support broadcasting, similar to NumPy.

Most scalar operators are under the module tf.math.

```python
import tensorflow as tf
import numpy as np
```

```python
a = tf.constant([[1.0,2],[-3,4.0]])
b = tf.constant([[5.0,6],[7.0,8.0]])
a + b  # Operator overloading
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 6.,  8.],
       [ 4., 12.]], dtype=float32)>
```

```python
a - b
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ -4.,  -4.],
       [-10.,  -4.]], dtype=float32)>
```

```python
a * b
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[  5.,  12.],
       [-21.,  32.]], dtype=float32)>
```

```python
a / b
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 0.2       ,  0.33333334],
       [-0.42857143,  0.5       ]], dtype=float32)>
```

```python
a ** 2
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 1.,  4.],
       [ 9., 16.]], dtype=float32)>
```

```python
a ** 0.5
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1.       , 1.4142135],
       [      nan, 2.       ]], dtype=float32)>
```
```python
a % 3  # Overloading of the mod operator, identical to: m = tf.math.mod(a, 3)
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 2.],
       [0., 1.]], dtype=float32)>
```
```python
a // 3  # Divide and round towards negative infinity
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[ 0.,  0.],
       [-1.,  1.]], dtype=float32)>
```
```python
a >= 2
```

```
<tf.Tensor: shape=(2, 2), dtype=bool, numpy=
array([[False,  True],
       [False,  True]])>
```

```python
(a >= 2) & (a <= 3)
```

```
<tf.Tensor: shape=(2, 2), dtype=bool, numpy=
array([[False,  True],
       [False, False]])>
```

```python
(a >= 2) | (a <= 3)
```

```
<tf.Tensor: shape=(2, 2), dtype=bool, numpy=
array([[ True,  True],
       [ True,  True]])>
```
```python
a == 5  # tf.equal(a, 5)
```

```
<tf.Tensor: shape=(2, 2), dtype=bool, numpy=
array([[False, False],
       [False, False]])>
```
```python
tf.sqrt(a)
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1.       , 1.4142135],
       [      nan, 2.       ]], dtype=float32)>
```

```python
a = tf.constant([1.0,8.0])
b = tf.constant([5.0,6.0])
c = tf.constant([6.0,7.0])
tf.add_n([a,b,c])
```

```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([12., 21.], dtype=float32)>
```

```python
tf.print(tf.maximum(a,b))
```

```
[5 8]
```

```python
tf.print(tf.minimum(a,b))
```

```
[1 6]
```
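The trigonometric, exponential, and log functions named at the start of this section live in tf.math, and several also have top-level aliases such as tf.exp and tf.sin. A minimal sketch (the input values are made up for illustration):

```python
import numpy as np
import tensorflow as tf

x = tf.constant([1.0, 4.0, 9.0])
print(tf.exp(x))       # element-wise e**x
print(tf.math.log(x))  # element-wise natural logarithm
print(tf.sin(x))       # element-wise sine
```

Like the overloaded operators above, these functions all operate element by element and follow the same broadcasting rules.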

2. Vector Operation

Vector operations manipulate along one specific axis, projecting a vector to a scalar or to another vector. The names of many vector operators start with "reduce".

```python
# Vector "reduce"
a = tf.range(1,10)
tf.print(tf.reduce_sum(a))
tf.print(tf.reduce_mean(a))
tf.print(tf.reduce_max(a))
tf.print(tf.reduce_min(a))
tf.print(tf.reduce_prod(a))
```

```
45
5
9
1
362880
```
```python
# "reduce" along a specific dimension
b = tf.reshape(a, (3,3))
tf.print(tf.reduce_sum(b, axis=1, keepdims=True))
tf.print(tf.reduce_sum(b, axis=0, keepdims=True))
```

```
[[6]
 [15]
 [24]]
[[12 15 18]]
```
```python
# "reduce" for the bool type
p = tf.constant([True,False,False])
q = tf.constant([False,False,True])
tf.print(tf.reduce_all(p))
tf.print(tf.reduce_any(q))
```

```
0
1
```
```python
# Implement tf.reduce_sum using tf.foldr
s = tf.foldr(lambda a,b: a+b, tf.range(10))
tf.print(s)
```

```
45
```
```python
# Cumulative sum and cumulative product
a = tf.range(1,10)
tf.print(tf.math.cumsum(a))
tf.print(tf.math.cumprod(a))
```

```
[1 3 6 ... 28 36 45]
[1 2 6 ... 5040 40320 362880]
```
```python
# Index of the max and min values in the arguments
a = tf.range(1,10)
tf.print(tf.argmax(a))
tf.print(tf.argmin(a))
```

```
8
0
```
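For matrices, tf.argmax and tf.argmin also accept an axis argument; a short sketch (the matrix here is made up for illustration):

```python
import tensorflow as tf

b = tf.constant([[1, 9, 3],
                 [7, 2, 6]])
tf.print(tf.argmax(b, axis=0))  # index of the max in each column
tf.print(tf.argmax(b, axis=1))  # index of the max in each row
```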
```python
# Sort the elements in the tensor using tf.math.top_k
a = tf.constant([1,3,7,5,4,8])
values,indices = tf.math.top_k(a, 3, sorted=True)
tf.print(values)
tf.print(indices)
# tf.math.top_k is able to implement the KNN algorithm in TensorFlow
```

```
[8 7 5]
[5 2 3]
```
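As the comment above notes, tf.math.top_k can serve as the core of a KNN search. A minimal sketch (the sample points and query are made up): since top_k returns the largest values, we negate the squared distances so that it selects the k smallest distances instead:

```python
import tensorflow as tf

points = tf.constant([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [1.0, 0.0]])
query = tf.constant([0.9, 0.9])

# Squared Euclidean distance from the query to every point
dist2 = tf.reduce_sum(tf.square(points - query), axis=1)

# Negate the distances so top_k picks the k nearest points
neg_dist, knn_idx = tf.math.top_k(-dist2, k=2)
tf.print(knn_idx)  # indices of the 2 nearest points
```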

3. Matrix Operation

A matrix must be two-dimensional. Something such as tf.constant([1,2,3]) is a vector, not a matrix.

Matrix operations include matrix multiplication, transpose, inverse, trace, norm, determinant, eigenvalues, decompositions, etc.

Most matrix operations are in the module tf.linalg, except for a few popular operations such as matrix multiplication and transpose.

```python
# Matrix multiplication
a = tf.constant([[1,2],[3,4]])
b = tf.constant([[2,0],[0,2]])
a@b  # Identical to tf.matmul(a,b)
```

```
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[2, 4],
       [6, 8]], dtype=int32)>
```
```python
# Matrix transpose
a = tf.constant([[1.0,2],[3,4]])
tf.transpose(a)
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 3.],
       [2., 4.]], dtype=float32)>
```
```python
# Matrix inverse; the matrix must be of type tf.float32 or tf.double
a = tf.constant([[1.0,2],[3.0,4]], dtype=tf.float32)
tf.linalg.inv(a)
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[-2.0000002 ,  1.0000001 ],
       [ 1.5000001 , -0.50000006]], dtype=float32)>
```
```python
# Matrix trace
a = tf.constant([[1.0,2],[3,4]])
tf.linalg.trace(a)
```

```
<tf.Tensor: shape=(), dtype=float32, numpy=5.0>
```

```python
# Matrix norm
a = tf.constant([[1.0,2],[3,4]])
tf.linalg.norm(a)
```

```
<tf.Tensor: shape=(), dtype=float32, numpy=5.477226>
```

```python
# Determinant
a = tf.constant([[1.0,2],[3,4]])
tf.linalg.det(a)
```

```
<tf.Tensor: shape=(), dtype=float32, numpy=-2.0>
```
```python
# Eigenvalues; tf.linalg.eigvalsh assumes a self-adjoint (symmetric) matrix
tf.linalg.eigvalsh(a)
```

```
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([-0.8541021,  5.854102 ], dtype=float32)>
```
```python
# QR decomposition
a = tf.constant([[1.0,2.0],[3.0,4.0]], dtype=tf.float32)
q,r = tf.linalg.qr(a)
tf.print(q)
tf.print(r)
tf.print(q@r)
```

```
[[-0.316227794 -0.948683321]
 [-0.948683321 0.316227734]]
[[-3.1622777 -4.4271884]
 [0 -0.632455349]]
[[1.00000012 1.99999976]
 [3 4]]
```
```python
# SVD decomposition
# tf.linalg.svd returns (s, u, v) such that a = u @ diag(s) @ transpose(v)
a = tf.constant([[1.0,2.0],[3.0,4.0]], dtype=tf.float32)
s,u,v = tf.linalg.svd(a)
tf.matmul(tf.matmul(u, tf.linalg.diag(s)), v, adjoint_b=True)
# SVD decomposition is used for dimension reduction in PCA
```

```
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0.9999996, 1.9999996],
       [2.9999998, 4.       ]], dtype=float32)>
```
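Besides decompositions, tf.linalg also provides a direct solver for linear systems, which is usually cheaper and numerically more stable than forming an explicit inverse. A small sketch (the system here is made up for illustration):

```python
import tensorflow as tf

# Solve a @ x = rhs directly instead of computing tf.linalg.inv(a) @ rhs
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
rhs = tf.constant([[5.0], [11.0]])
x = tf.linalg.solve(a, rhs)
tf.print(x)  # the solution column vector
```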

4. Broadcasting Mechanism

The rules of broadcasting in TensorFlow are the same as in NumPy:

1. If two tensors differ in rank, expand the tensor with the lower rank until the ranks match.
2. Two tensors are compatible along a dimension if they have the same length along it, or if one of them has length 1 along it.
3. Two tensors that are compatible along all dimensions are able to broadcast.
4. After broadcasting, the length of each dimension equals the larger one of the two tensors.
5. When a tensor has length 1 along a dimension while the other tensor has length > 1 along it, the single element is duplicated along this dimension in the broadcast result.

tf.broadcast_to expands the dimensions of a tensor explicitly.

```python
a = tf.constant([1,2,3])
b = tf.constant([[0,0,0],[1,1,1],[2,2,2]])
b + a  # Identical to b + tf.broadcast_to(a, b.shape)
```

```
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [2, 3, 4],
       [3, 4, 5]], dtype=int32)>
```

```python
tf.broadcast_to(a, b.shape)
```

```
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]], dtype=int32)>
```

```python
# Shape after broadcasting with static shapes; the arguments are of TensorShape type
tf.broadcast_static_shape(a.shape, b.shape)
```

```
TensorShape([3, 3])
```

```python
# Shape after broadcasting with dynamic shapes; the arguments are of Tensor type
c = tf.constant([1,2,3])
d = tf.constant([[1],[2],[3]])
tf.broadcast_dynamic_shape(tf.shape(c), tf.shape(d))
```

```
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 3], dtype=int32)>
```

```python
# Results of broadcasting
c + d  # Identical to tf.broadcast_to(c,[3,3]) + tf.broadcast_to(d,[3,3])
```

```
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]], dtype=int32)>
```
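The rules above can be seen working together in one more example: a tensor of shape (2, 1) combined with a tensor of shape (3,) broadcasts to shape (2, 3):

```python
import tensorflow as tf

# Rule 1: the shape (3,) tensor is first treated as shape (1, 3);
# rules 4-5: each length-1 dimension is then duplicated to match
# the other operand, giving a (2, 3) result.
x = tf.constant([[1.0], [2.0]])      # shape (2, 1)
y = tf.constant([10.0, 20.0, 30.0])  # shape (3,)
z = x * y                            # broadcast to shape (2, 3)
tf.print(tf.shape(z))
```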

Please leave comments in the WeChat official account "Python与算法之美" (Elegance of Python and Algorithms) if you want to communicate with the author about the content. The author will try their best to reply given the limited time available.

You are also welcome to join the group chat with the other readers by replying 加群 (join group) in the WeChat official account.
