TranslatedLayer

class paddle.jit.TranslatedLayer(programs, persistable_vars) [source]

TranslatedLayer is a subclass of the imperative-mode Layer, constructed by loading a saved model via load. Like a regular Layer, it can be used in train or eval mode.

Note

A TranslatedLayer object cannot be created through its constructor; it can only be constructed by loading a saved model via the load API.
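
For instance, a minimal sketch (assuming a model has already been saved with paddle.jit.save under the path "linear.example.model", as in the full example below):

  import paddle

  # A TranslatedLayer comes from paddle.jit.load, never from the constructor.
  # "linear.example.model" is assumed to be a prefix previously produced by paddle.jit.save.
  layer = paddle.jit.load("linear.example.model")
  print(isinstance(layer, paddle.jit.TranslatedLayer))  # True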

Example:

  import numpy as np
  import paddle
  import paddle.nn as nn
  import paddle.optimizer as opt

  BATCH_SIZE = 16
  BATCH_NUM = 4
  EPOCH_NUM = 4

  IMAGE_SIZE = 784
  CLASS_NUM = 10

  # define a random dataset
  class RandomDataset(paddle.io.Dataset):
      def __init__(self, num_samples):
          self.num_samples = num_samples

      def __getitem__(self, idx):
          image = np.random.random([IMAGE_SIZE]).astype('float32')
          label = np.random.randint(0, CLASS_NUM - 1, (1, )).astype('int64')
          return image, label

      def __len__(self):
          return self.num_samples

  class LinearNet(nn.Layer):
      def __init__(self):
          super(LinearNet, self).__init__()
          self._linear = nn.Linear(IMAGE_SIZE, CLASS_NUM)

      @paddle.jit.to_static
      def forward(self, x):
          return self._linear(x)

  def train(layer, loader, loss_fn, opt):
      for epoch_id in range(EPOCH_NUM):
          for batch_id, (image, label) in enumerate(loader()):
              out = layer(image)
              loss = loss_fn(out, label)
              loss.backward()
              opt.step()
              opt.clear_grad()
              print("Epoch {} batch {}: loss = {}".format(
                  epoch_id, batch_id, np.mean(loss.numpy())))

  # 1. train & save model.

  # create network
  layer = LinearNet()
  loss_fn = nn.CrossEntropyLoss()
  adam = opt.Adam(learning_rate=0.001, parameters=layer.parameters())

  # create data loader
  dataset = RandomDataset(BATCH_NUM * BATCH_SIZE)
  loader = paddle.io.DataLoader(dataset,
      batch_size=BATCH_SIZE,
      shuffle=True,
      drop_last=True,
      num_workers=2)

  # train
  train(layer, loader, loss_fn, adam)

  # save
  model_path = "linear.example.model"
  paddle.jit.save(layer, model_path)

  # 2. load model as TranslatedLayer

  # load
  translated_layer = paddle.jit.load(model_path)

  # inference
  translated_layer.eval()
  x = paddle.randn([1, IMAGE_SIZE], 'float32')
  pred = translated_layer(x)

  # fine-tune
  translated_layer.train()
  adam = opt.Adam(learning_rate=0.001, parameters=translated_layer.parameters())
  train(translated_layer, loader, loss_fn, adam)

program(method_name='forward')

Get the Program corresponding to the specified method of this TranslatedLayer.

Parameters:

  • method_name (string) - The name of the method whose Program should be retrieved. Default: 'forward'. (A sketch that passes method_name explicitly follows the example below.)

Returns: Program

Return type: Program

Example:

  import numpy as np
  import paddle
  import paddle.nn as nn
  import paddle.optimizer as opt

  BATCH_SIZE = 16
  BATCH_NUM = 4
  EPOCH_NUM = 4

  IMAGE_SIZE = 784
  CLASS_NUM = 10

  # define a random dataset
  class RandomDataset(paddle.io.Dataset):
      def __init__(self, num_samples):
          self.num_samples = num_samples

      def __getitem__(self, idx):
          image = np.random.random([IMAGE_SIZE]).astype('float32')
          label = np.random.randint(0, CLASS_NUM - 1, (1, )).astype('int64')
          return image, label

      def __len__(self):
          return self.num_samples

  class LinearNet(nn.Layer):
      def __init__(self):
          super(LinearNet, self).__init__()
          self._linear = nn.Linear(IMAGE_SIZE, CLASS_NUM)

      @paddle.jit.to_static
      def forward(self, x):
          return self._linear(x)

  def train(layer, loader, loss_fn, opt):
      for epoch_id in range(EPOCH_NUM):
          for batch_id, (image, label) in enumerate(loader()):
              out = layer(image)
              loss = loss_fn(out, label)
              loss.backward()
              opt.step()
              opt.clear_grad()
              print("Epoch {} batch {}: loss = {}".format(
                  epoch_id, batch_id, np.mean(loss.numpy())))

  # create network
  layer = LinearNet()
  loss_fn = nn.CrossEntropyLoss()
  adam = opt.Adam(learning_rate=0.001, parameters=layer.parameters())

  # create data loader
  dataset = RandomDataset(BATCH_NUM * BATCH_SIZE)
  loader = paddle.io.DataLoader(dataset,
      batch_size=BATCH_SIZE,
      shuffle=True,
      drop_last=True,
      num_workers=2)

  # train
  train(layer, loader, loss_fn, adam)

  # save
  model_path = "linear.example.model"
  paddle.jit.save(layer, model_path)

  # load
  translated_layer = paddle.jit.load(model_path)

  # get program
  program = translated_layer.program()
  print(program)
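
The method_name argument can also be passed explicitly. A minimal sketch, assuming the translated_layer loaded above (only forward was saved in this example, so 'forward' is the only valid name here):

  # get the Program of a named method; equivalent to the default call in this example
  forward_program = translated_layer.program(method_name='forward')
  print(forward_program)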