Image Classification with Convolutional Neural Networks

Author: PaddlePaddle

Date: 2021.01

Abstract: This tutorial demonstrates how to use PaddlePaddle's convolutional neural network APIs to complete an image classification task. It is a fairly simple example: a network composed of three convolutional layers is used to classify images from the CIFAR-10 dataset.

1. Environment Setup

This tutorial is written for Paddle 2.0. If your environment runs a different version, please first refer to the official site and install Paddle 2.0.

    import paddle
    import paddle.nn.functional as F
    from paddle.vision.transforms import ToTensor
    import numpy as np
    import matplotlib.pyplot as plt

    print(paddle.__version__)

    2.0.0

2. Loading the Dataset

We will use the APIs provided by PaddlePaddle to download the dataset and prepare the data iterators for the training task that follows. The CIFAR-10 dataset consists of 60,000 color images of size 32 x 32: 50,000 of them form the training set and the remaining 10,000 form the test set. The images fall into 10 classes, and our task is to train a model that classifies them correctly.

    transform = ToTensor()
    cifar10_train = paddle.vision.datasets.Cifar10(mode='train',
                                                   transform=transform)
    cifar10_test = paddle.vision.datasets.Cifar10(mode='test',
                                                  transform=transform)
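ToTensor converts each CIFAR-10 image from an HWC uint8 array with values in [0, 255] to a CHW float32 tensor with values in [0, 1], which is the layout Conv2D expects. The NumPy sketch below illustrates an equivalent conversion on a random image; it is illustrative only, not Paddle's actual implementation:

```python
import numpy as np

# A CIFAR-10 image as stored: height x width x channels, uint8 in [0, 255].
hwc_image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

# ToTensor-like conversion (illustrative): scale to [0, 1] floats and
# move the channel axis to the front (CHW).
chw_image = hwc_image.transpose(2, 0, 1).astype('float32') / 255.0

print(chw_image.shape)   # (3, 32, 32)
print(chw_image.dtype)   # float32
```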

3. Building the Network

Next we use PaddlePaddle to define a classification network composed of three 2-D convolutional layers ( Conv2D ), each followed by a relu activation, two 2-D max-pooling layers ( MaxPool2D ), and two linear layers. It maps an input image of shape (3, 32, 32) (channels first, as produced by ToTensor) to 10 outputs, one for each of the 10 classes.

    class MyNet(paddle.nn.Layer):
        def __init__(self, num_classes=1):
            super(MyNet, self).__init__()

            self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3))
            self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

            self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3, 3))
            self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

            self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3, 3))

            self.flatten = paddle.nn.Flatten()
            self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
            self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.pool1(x)

            x = self.conv2(x)
            x = F.relu(x)
            x = self.pool2(x)

            x = self.conv3(x)
            x = F.relu(x)

            x = self.flatten(x)
            x = self.linear1(x)
            x = F.relu(x)
            x = self.linear2(x)
            return x
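The value in_features=1024 passed to linear1 is not arbitrary: it follows from the spatial sizes produced by the layers above (3x3 convolutions with no padding, 2x2 pooling with stride 2, applied to a 32 x 32 input). A small sketch to verify the arithmetic:

```python
def out_size(size, kernel, stride=1, padding=0):
    # Output spatial size of a convolution (the same formula applies to pooling).
    return (size + 2 * padding - kernel) // stride + 1

size = 32
size = out_size(size, kernel=3)             # conv1: 32 -> 30
size = out_size(size, kernel=2, stride=2)   # pool1: 30 -> 15
size = out_size(size, kernel=3)             # conv2: 15 -> 13
size = out_size(size, kernel=2, stride=2)   # pool2: 13 -> 6
size = out_size(size, kernel=3)             # conv3: 6 -> 4

flattened = 64 * size * size  # conv3 outputs 64 channels of size 4 x 4
print(flattened)  # 1024
```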

4. Model Training & Prediction

Next, we train the model in a loop. We will:

- use the paddle.optimizer.Adam optimizer to update the parameters;
- use F.cross_entropy to compute the loss;
- use paddle.io.DataLoader to load the data and assemble batches.

    epoch_num = 10
    batch_size = 32
    learning_rate = 0.001

    val_acc_history = []
    val_loss_history = []

    def train(model):
        print('start training ... ')
        # turn into training mode
        model.train()

        opt = paddle.optimizer.Adam(learning_rate=learning_rate,
                                    parameters=model.parameters())

        train_loader = paddle.io.DataLoader(cifar10_train,
                                            shuffle=True,
                                            batch_size=batch_size)
        valid_loader = paddle.io.DataLoader(cifar10_test, batch_size=batch_size)

        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data = data[0]
                y_data = paddle.to_tensor(data[1])
                y_data = paddle.unsqueeze(y_data, 1)

                logits = model(x_data)
                loss = F.cross_entropy(logits, y_data)

                if batch_id % 1000 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, loss.numpy()))
                loss.backward()
                opt.step()
                opt.clear_grad()

            # evaluate model after one epoch
            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data = data[0]
                y_data = paddle.to_tensor(data[1])
                y_data = paddle.unsqueeze(y_data, 1)

                logits = model(x_data)
                loss = F.cross_entropy(logits, y_data)
                acc = paddle.metric.accuracy(logits, y_data)
                accuracies.append(acc.numpy())
                losses.append(loss.numpy())

            avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
            print("[validation] accuracy/loss: {}/{}".format(avg_acc, avg_loss))
            val_acc_history.append(avg_acc)
            val_loss_history.append(avg_loss)
            model.train()

    model = MyNet(num_classes=10)
    train(model)
    start training ...
    epoch: 0, batch_id: 0, loss is: [2.2362428]
    epoch: 0, batch_id: 1000, loss is: [1.206327]
    [validation] accuracy/loss: 0.5383386611938477/1.2577064037322998
    epoch: 1, batch_id: 0, loss is: [1.370784]
    epoch: 1, batch_id: 1000, loss is: [1.0781252]
    [validation] accuracy/loss: 0.6376796960830688/1.0298848152160645
    epoch: 2, batch_id: 0, loss is: [0.9192907]
    epoch: 2, batch_id: 1000, loss is: [0.7311921]
    [validation] accuracy/loss: 0.6576477885246277/0.9908456802368164
    epoch: 3, batch_id: 0, loss is: [0.61424184]
    epoch: 3, batch_id: 1000, loss is: [0.8268999]
    [validation] accuracy/loss: 0.6778154969215393/0.9368402361869812
    epoch: 4, batch_id: 0, loss is: [0.8788361]
    epoch: 4, batch_id: 1000, loss is: [1.139102]
    [validation] accuracy/loss: 0.7055711150169373/0.8624006509780884
    epoch: 5, batch_id: 0, loss is: [0.4790781]
    epoch: 5, batch_id: 1000, loss is: [0.46481135]
    [validation] accuracy/loss: 0.7040734887123108/0.8620880246162415
    epoch: 6, batch_id: 0, loss is: [0.8061414]
    epoch: 6, batch_id: 1000, loss is: [0.8912587]
    [validation] accuracy/loss: 0.7112619876861572/0.8590201139450073
    epoch: 7, batch_id: 0, loss is: [0.5126707]
    epoch: 7, batch_id: 1000, loss is: [0.70433134]
    [validation] accuracy/loss: 0.7098641991615295/0.8762255907058716
    epoch: 8, batch_id: 0, loss is: [0.70113385]
    epoch: 8, batch_id: 1000, loss is: [0.58052105]
    [validation] accuracy/loss: 0.7064696550369263/0.9035584330558777
    epoch: 9, batch_id: 0, loss is: [0.34707433]
    epoch: 9, batch_id: 1000, loss is: [0.59680617]
    [validation] accuracy/loss: 0.7041733264923096/0.945155143737793
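Note that forward returns raw logits rather than probabilities; F.cross_entropy applies the softmax internally. To turn logits into class predictions at inference time, apply a softmax and take the argmax. Below is a NumPy sketch with illustrative logit values; in practice they would come from model(x_data).numpy():

```python
import numpy as np

# Illustrative logits for a batch of 2 images over the 10 CIFAR-10 classes
# (made-up numbers, not actual model output).
logits = np.array([[0.1, 2.5, 0.3, 0.0, -1.0, 0.2, 0.4, 0.1, 0.0, 0.3],
                   [1.2, 0.0, 0.1, 3.1, 0.2, 0.0, -0.5, 0.6, 0.1, 0.0]])

# Numerically stable softmax: subtract the row max before exponentiating.
shifted = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# The predicted class is the index of the largest probability per row.
preds = probs.argmax(axis=1)
print(preds)  # [1 3]
```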
    plt.plot(val_acc_history, label='validation accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.ylim([0.5, 0.8])
    plt.legend(loc='lower right')

    <matplotlib.legend.Legend at 0x12c3686d0>

../../../_images/convnet_image_classification_10_1.png

The End

As the example above shows, with a simple convolutional neural network built with PaddlePaddle, we can reach over 70% accuracy on the CIFAR-10 dataset. You can also adjust the network structure and hyperparameters to achieve better results.