Tensorboard
Integration with tensorboard
First things first: you need to install tensorboard with
pip install tensorboard
Then launch tensorboard with
tensorboard --logdir=runs
in your terminal. You can change the logdir as long as it matches the log_dir you pass to TensorBoardCallback (default is runs in the working directory).
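For example, to log to a custom directory, pass the same path to the callback and to the tensorboard command. A minimal sketch, assuming dls is an existing DataLoaders (the 'my_runs' path is only an illustration):
learn = cnn_learner(dls, resnet18, metrics=accuracy)
# events are written to 'my_runs' in the working directory
learn.fit_one_cycle(1, cbs=TensorBoardCallback(log_dir='my_runs'))
# inspect with: tensorboard --logdir=my_runs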
Tensorboard Embedding Projector support
Tensorboard Embedding Projector is currently only supported for image classification.
Export Image Features during Training
Tensorboard Embedding Projector is supported in TensorBoardCallback (set the parameter projector=True) during training. The validation set embeddings will be written after each epoch.
cbs = [TensorBoardCallback(projector=True)]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(3, cbs=cbs)
Export Image Features during Inference
To write the embeddings for a custom dataset (e.g. after loading a learner), use TensorBoardProjectorCallback. Add the callback manually to the learner.
learn = load_learner('path/to/export.pkl')
learn.add_cb(TensorBoardProjectorCallback())
dl = learn.dls.test_dl(files, with_labels=True)
_ = learn.get_preds(dl=dl)
If you are using a custom model (i.e. not a fastai resnet), pass the layer from which the embeddings should be extracted as a parameter to the callback.
layer = learn.model[1][1]
cbs = [TensorBoardProjectorCallback(layer=layer)]
preds = learn.get_preds(dl=dl, cbs=cbs)
Export Word Embeddings from Language Models
Word embeddings can be exported from language models with projector_word_embeddings. This has been tested with AWD_LSTM (fast.ai) and GPT2 / BERT (transformers), but it works with every model that contains an embedding layer.
For a fast.ai TextLearner or LMLearner just pass the learner - the embedding layer and vocab will be extracted automatically:
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
projector_word_embeddings(learn=learn, limit=2000, start=2000)
For other language models - like the ones in the transformers library - you’ll have to pass the layer and vocab. Here’s an example for a BERT model.
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
# get the word embedding layer
layer = model.embeddings.word_embeddings
# get and sort vocab
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
# write the embeddings for tb projector
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000)
class TensorBoardBaseCallback [source]
TensorBoardBaseCallback() :: Callback
Basic class handling tweaks of the training loop by changing a Learner in various events
class TensorBoardCallback [source]
TensorBoardCallback(log_dir=None, trace_model=True, log_preds=True, n_preds=9, projector=False, layer=None) :: TensorBoardBaseCallback
Saves model topology, losses & metrics for tensorboard and tensorboard projector during training
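For a quick illustration of these parameters, here is a minimal sketch, assuming dls is an existing image DataLoaders (all values are illustrative):
cb = TensorBoardCallback(log_dir='runs/exp1', # where event files go; match tensorboard --logdir
                         trace_model=True,    # also export the model graph
                         log_preds=True,      # log a sample of validation predictions
                         n_preds=9,           # how many predictions to log
                         projector=False)     # skip the embedding projector for this run
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(1, cbs=cb)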
class TensorBoardProjectorCallback [source]
TensorBoardProjectorCallback(log_dir=None, layer=None) :: TensorBoardBaseCallback
Extracts and exports image features for tensorboard projector during inference
projector_word_embeddings [source]
projector_word_embeddings(learn=None, layer=None, vocab=None, limit=-1, start=0, log_dir=None)
Extracts and exports word embeddings from language models' embedding layers
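The start and limit parameters select which slice of the embedding matrix is written: start is the first vocab index to export and limit the number of embeddings exported (limit=-1 exports everything). A minimal sketch, assuming learn is an existing fastai text learner and the log path is illustrative:
# export the embeddings for vocab indices 500 to 1499
projector_word_embeddings(learn=learn, start=500, limit=1000, log_dir='runs/word_embs')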
TensorBoardCallback
from fastai.vision.all import *
from fastai.callback.tensorboard import TensorBoardCallback, TensorBoardProjectorCallback, projector_word_embeddings
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
               get_items=get_image_files,
               item_tfms=Resize(128),
               splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
               batch_tfms=aug_transforms(size=64),
               get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=TensorBoardCallback(Path.home()/'tmp'/'runs'/'tb', trace_model=True))
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 4.973294 | 5.009670 | 0.082192 | 00:03 |
| 1 | 4.382769 | 4.438282 | 0.095890 | 00:03 |
| 2 | 3.877172 | 3.665855 | 0.178082 | 00:04 |
Projector
Projector in TensorBoardCallback
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
               get_items=get_image_files,
               item_tfms=Resize(128),
               splitter=RandomSubsetSplitter(train_sz=0.05, valid_sz=0.01),
               batch_tfms=aug_transforms(size=64),
               get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
cbs = [TensorBoardCallback(log_dir=Path.home()/'tmp'/'runs'/'vision1', projector=True)]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
learn.fit_one_cycle(3, cbs=cbs)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 5.143322 | 6.736727 | 0.082192 | 00:03 |
| 1 | 4.508100 | 5.106580 | 0.109589 | 00:03 |
| 2 | 4.057889 | 4.194602 | 0.068493 | 00:03 |
TensorBoardProjectorCallback
path = untar_data(URLs.PETS)
db = DataBlock(blocks=(ImageBlock, CategoryBlock),
               get_items=get_image_files,
               item_tfms=Resize(128),
               splitter=RandomSubsetSplitter(train_sz=0.1, valid_sz=0.01),
               batch_tfms=aug_transforms(size=64),
               get_y=using_attr(RegexLabeller(r'(.+)_\d+.*$'), 'name'))
dls = db.dataloaders(path/'images')
files = get_image_files(path/'images')
files = files[:256]
learn = cnn_learner(dls, resnet18, metrics=accuracy)
dl = learn.dls.test_dl(files, with_labels=True)
layer = learn.model[1][0].ap
cbs = [TensorBoardProjectorCallback(layer=layer, log_dir=Path.home()/'tmp'/'runs'/'vision2')]
_ = learn.get_preds(dl=dl, cbs=cbs)
projector_word_embeddings
fastai text or LM learner
from fastai.text.all import TextDataLoaders, text_classifier_learner, AWD_LSTM
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
projector_word_embeddings(learn, limit=1000, log_dir=Path.home()/'tmp'/'runs'/'text')
transformers
GPT2
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
layer = model.transformer.wte
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, log_dir=Path.home()/'tmp'/'runs'/'transformers')
BERT
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
layer = model.embeddings.word_embeddings
vocab_dict = tokenizer.get_vocab()
vocab = [k for k, v in sorted(vocab_dict.items(), key=lambda x: x[1])]
projector_word_embeddings(layer=layer, vocab=vocab, limit=2000, start=2000, log_dir=Path.home()/'tmp'/'runs'/'transformers')
Validate results in tensorboard
Run the following command in the terminal to check whether the projector embeddings have been correctly written:
tensorboard --logdir=~/tmp/runs
Open http://localhost:6006 in a browser (the TensorBoard Projector doesn't work correctly in Safari!).