Trains two recurrent neural networks based upon a story and a question.

The merged vector of the two is then queried to answer a range of bAbI tasks.

The results are comparable to those for the LSTM model provided by Weston et al.: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks (http://arxiv.org/abs/1502.05698).

Task Number                        FB LSTM Baseline    Keras QA
QA1 - Single Supporting Fact       50                  52.1
QA2 - Two Supporting Facts         20                  37.0
QA3 - Three Supporting Facts       20                  20.5
QA4 - Two Arg. Relations           61                  62.9
QA5 - Three Arg. Relations         70                  61.9
QA6 - Yes/No Questions             48                  50.7
QA7 - Counting                     49                  78.9
QA8 - Lists/Sets                   45                  77.2
QA9 - Simple Negation              64                  64.0
QA10 - Indefinite Knowledge        44                  47.7
QA11 - Basic Coreference           72                  74.9
QA12 - Conjunction                 74                  76.4
QA13 - Compound Coreference        94                  94.4
QA14 - Time Reasoning              27                  34.8
QA15 - Basic Deduction             21                  32.4
QA16 - Basic Induction             23                  50.6
QA17 - Positional Reasoning        51                  49.1
QA18 - Size Reasoning              52                  90.8
QA19 - Path Finding                8                   9.0
QA20 - Agent's Motivations         91                  90.7

For resources related to the bAbI project, see: https://research.facebook.com/researchers/1543934539189348

Notes

  • With the default word, sentence, and query vector sizes, the GRU model achieves:
    - 52.1% test accuracy on QA1 after 20 epochs (2 seconds per epoch on CPU);
    - 37.0% test accuracy on QA2 after 20 epochs (16 seconds per epoch on CPU).

In comparison, the LSTM baseline in the Facebook paper achieves 50% and 20%, respectively.

  • Traditionally, this task is not solved by parsing the question separately. Doing so here likely improves accuracy, and it is a good example of merging two RNNs.

  • The word vector embeddings are not shared between the story and question RNNs (the first sketch after this list shows what sharing them would look like).

  • See how the accuracy changes when going from 1,000 training samples (en) to 10,000 (en-10k); 1,000 was used in order to be comparable to the original paper. (The path switch is shown after this list.)

  • Experiment with GRU, LSTM, and JZS1-3, as they give subtly different results (a one-line swap, also shown after this list).

  • Length and noise (i.e. "useless" story components) impact the ability of LSTMs/GRUs to provide the correct answer. Given only the supporting facts, these RNNs can achieve 100% accuracy on many tasks. Memory networks, and neural networks that use attentional processes, can efficiently search through this noise to find the relevant statements, improving performance significantly. This becomes especially obvious on QA2 and QA3, both of which are far longer than QA1.
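On the embedding note above: in the Keras functional API, reusing a single layer instance reuses its weights, so sharing the story and question embeddings would be a small change. A minimal sketch, with hypothetical sizes (the full script below instead keeps one Embedding per branch):

from keras import layers
from keras.layers import recurrent

# Hypothetical sizes, for illustration only
vocab_size, story_maxlen, query_maxlen = 40, 68, 4

sentence = layers.Input(shape=(story_maxlen,), dtype='int32')
question = layers.Input(shape=(query_maxlen,), dtype='int32')

# One Embedding instance means one shared weight matrix for both branches
shared_embedding = layers.Embedding(vocab_size, 50)
encoded_sentence = recurrent.LSTM(100)(shared_embedding(sentence))
encoded_question = recurrent.LSTM(100)(shared_embedding(question))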
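For the sample-size comparison, the 10k variants are already listed (commented out) in the script; switching is only a path change, e.g. for QA2:

challenge = 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt'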
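For the RNN experiment, the recurrent layer sits behind a single alias, so swapping architectures is a one-line change (note that the JZS1-3 variants shipped only with older, pre-1.0 Keras releases):

RNN = recurrent.GRU  # instead of recurrent.LSTM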

from __future__ import print_function
from functools import reduce
import re
import tarfile
import numpy as np
from keras.utils.data_utils import get_file
from keras.layers.embeddings import Embedding
from keras import layers
from keras.layers import recurrent
from keras.models import Model
from keras.preprocessing.sequence import pad_sequences


def tokenize(sent):
    '''Return the tokens of a sentence including punctuation.

    >>> tokenize('Bob dropped the apple. Where is the apple?')
    ['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
    '''
    return [x.strip() for x in re.split(r'(\W+)', sent) if x.strip()]


def parse_stories(lines, only_supporting=False):
    '''Parse stories provided in the bAbi tasks format.

    If only_supporting is true,
    only the sentences that support the answer are kept.
    '''
    data = []
    story = []
    for line in lines:
        line = line.decode('utf-8').strip()
        nid, line = line.split(' ', 1)
        nid = int(nid)
        if nid == 1:
            story = []
        if '\t' in line:
            q, a, supporting = line.split('\t')
            q = tokenize(q)
            if only_supporting:
                # Only select the related substory
                supporting = map(int, supporting.split())
                substory = [story[i - 1] for i in supporting]
            else:
                # Provide all the substories
                substory = [x for x in story if x]
            data.append((substory, q, a))
            story.append('')
        else:
            sent = tokenize(line)
            story.append(sent)
    return data


def get_stories(f, only_supporting=False, max_length=None):
    '''Given a file, read it, retrieve the stories,
    and then convert the sentences into a single story.

    If max_length is supplied,
    any stories longer than max_length tokens are discarded.
    '''
    data = parse_stories(f.readlines(), only_supporting=only_supporting)
    flatten = lambda data: reduce(lambda x, y: x + y, data)
    data = [(flatten(story), q, answer) for story, q, answer in data
            if not max_length or len(flatten(story)) < max_length]
    return data


def vectorize_stories(data, word_idx, story_maxlen, query_maxlen):
    xs = []
    xqs = []
    ys = []
    for story, query, answer in data:
        x = [word_idx[w] for w in story]
        xq = [word_idx[w] for w in query]
        # let's not forget that index 0 is reserved
        y = np.zeros(len(word_idx) + 1)
        y[word_idx[answer]] = 1
        xs.append(x)
        xqs.append(xq)
        ys.append(y)
    return (pad_sequences(xs, maxlen=story_maxlen),
            pad_sequences(xqs, maxlen=query_maxlen), np.array(ys))


RNN = recurrent.LSTM
EMBED_HIDDEN_SIZE = 50
SENT_HIDDEN_SIZE = 100
QUERY_HIDDEN_SIZE = 100
BATCH_SIZE = 32
EPOCHS = 20
print('RNN / Embed / Sent / Query = {}, {}, {}, {}'.format(RNN,
                                                           EMBED_HIDDEN_SIZE,
                                                           SENT_HIDDEN_SIZE,
                                                           QUERY_HIDDEN_SIZE))

try:
    path = get_file('babi-tasks-v1-2.tar.gz',
                    origin='https://s3.amazonaws.com/text-datasets/'
                           'babi_tasks_1-20_v1-2.tar.gz')
except:
    print('Error downloading dataset, please download it manually:\n'
          '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2'
          '.tar.gz\n'
          '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')
    raise

# Default QA1 task, 1000 samples
# challenge = 'tasks_1-20_v1-2/en/qa1_single-supporting-fact_{}.txt'
# QA1 task, 10,000 samples
# challenge = 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt'
# QA2 task, 1000 samples
challenge = 'tasks_1-20_v1-2/en/qa2_two-supporting-facts_{}.txt'
# QA2 task, 10,000 samples
# challenge = 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt'
with tarfile.open(path) as tar:
    train = get_stories(tar.extractfile(challenge.format('train')))
    test = get_stories(tar.extractfile(challenge.format('test')))

vocab = set()
for story, q, answer in train + test:
    vocab |= set(story + q + [answer])
vocab = sorted(vocab)

# Reserve 0 for masking via pad_sequences
vocab_size = len(vocab) + 1
word_idx = dict((c, i + 1) for i, c in enumerate(vocab))
story_maxlen = max(map(len, (x for x, _, _ in train + test)))
query_maxlen = max(map(len, (x for _, x, _ in train + test)))

x, xq, y = vectorize_stories(train, word_idx, story_maxlen, query_maxlen)
tx, txq, ty = vectorize_stories(test, word_idx, story_maxlen, query_maxlen)

print('vocab = {}'.format(vocab))
print('x.shape = {}'.format(x.shape))
print('xq.shape = {}'.format(xq.shape))
print('y.shape = {}'.format(y.shape))
print('story_maxlen, query_maxlen = {}, {}'.format(story_maxlen, query_maxlen))

print('Build model...')

# Encode the story: embed the word indices, then run an RNN over the sequence
sentence = layers.Input(shape=(story_maxlen,), dtype='int32')
encoded_sentence = layers.Embedding(vocab_size, EMBED_HIDDEN_SIZE)(sentence)
encoded_sentence = RNN(SENT_HIDDEN_SIZE)(encoded_sentence)

# Encode the question with its own (unshared) embedding and RNN
question = layers.Input(shape=(query_maxlen,), dtype='int32')
encoded_question = layers.Embedding(vocab_size, EMBED_HIDDEN_SIZE)(question)
encoded_question = RNN(QUERY_HIDDEN_SIZE)(encoded_question)

# Merge the two encodings and predict the answer word over the vocabulary
merged = layers.concatenate([encoded_sentence, encoded_question])
preds = layers.Dense(vocab_size, activation='softmax')(merged)

model = Model([sentence, question], preds)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

print('Training')
model.fit([x, xq], y,
          batch_size=BATCH_SIZE,
          epochs=EPOCHS,
          validation_split=0.05)

print('Evaluation')
loss, acc = model.evaluate([tx, txq], ty,
                           batch_size=BATCH_SIZE)
print('Test loss / test accuracy = {:.4f} / {:.4f}'.format(loss, acc))