Look at data

A beginner’s tutorial explaining how to quickly look at your data or model predictions

Viewing inputs and outputs

In this tutorial, we’ll see how the same API lets you look at the inputs and outputs of your model, whether in a vision, text or tabular application. We’ll go over a lot of different tasks and, each time, grab some data in a DataBunch with the data block API, look at a few inputs with the show_batch method, train an appropriate Learner, then use the show_results method to see what the outputs of our model actually look like.

Note: As usual, this page is generated from a notebook that you can find in the docs_src folder of the fastai repo. The examples are all designed to run fast, which is why we use samples of the datasets, a resnet18 backbone, and don’t train for very long. You can change all of those parameters to run your own experiments!
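For instance, a scaled-up run of the MNIST classification example below might look like the following sketch. This is illustrative only: the full URLs.MNIST dataset (with its ‘training’/‘testing’ folders), the resnet34 backbone and the longer schedule are our assumptions, not the settings used in this tutorial.

    from fastai.vision import *

    # Hypothetical scale-up: full MNIST instead of MNIST_TINY, bigger backbone, more epochs
    mnist = untar_data(URLs.MNIST)
    data = (ImageList.from_folder(mnist)
            .split_by_folder(train='training', valid='testing')
            .label_from_folder()
            .transform(get_transforms(do_flip=False), size=32)
            .databunch()
            .normalize(imagenet_stats))
    learn = cnn_learner(data, models.resnet34, metrics=accuracy)
    learn.fit_one_cycle(4, 1e-2)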

Vision

To quickly get access to all the vision functions inside fastai, we use the usual import statements.

    from fastai.vision import *

A classification problem

Let’s begin with our sample of the MNIST dataset.

    mnist = untar_data(URLs.MNIST_TINY)
    tfms = get_transforms(do_flip=False)

It’s set up with an imagenet-style structure, so we use it to load our training and validation datasets, then label them, apply our transforms, convert them into an ImageDataBunch and, finally, normalize them.

    data = (ImageList.from_folder(mnist)
            .split_by_folder()
            .label_from_folder()
            .transform(tfms, size=32)
            .databunch()
            .normalize(imagenet_stats))

Once your data is properly set up in a DataBunch, we can call data.show_batch() to see what a sample of a batch looks like.

    data.show_batch()

Look at data - Figure 1

Note that the images were automatically de-normalized before being shown with their labels (inferred from the names of the folders). We can specify a number of rows if the default of 5 is too big, and we can also limit the size of the figure.

    data.show_batch(rows=3, figsize=(4,4))

Look at data - Figure 2

Now let’s create a Learner object to train a classifier.

    learn = cnn_learner(data, models.resnet18, metrics=accuracy)
    learn.fit_one_cycle(1,1e-2)
    learn.save('mini_train')

epoch  train_loss  valid_loss  accuracy  time
0      0.779994    0.744115    0.779685  00:01

Our model has quickly reached around 78% accuracy (see the table above); now let’s see its predictions on a sample of the validation set. For this, we use the show_results method.

    learn.show_results()

Look at data - Figure 3

Since the validation set is usually sorted, we get only images belonging to the same class. Again, we can specify a number of rows and a figure size, but also the dataset on which we want to make predictions.

    learn.show_results(ds_type=DatasetType.Train, rows=4, figsize=(8,10))

Look at data - Figure 4
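show_results displays a grid of predictions; if you want the prediction for a single image, learn.predict works too. A minimal sketch (not part of the original notebook; the index 0 is arbitrary):

    # grab one image from the validation set and predict its class
    img = data.valid_ds[0][0]
    pred_class, pred_idx, probs = learn.predict(img)
    pred_class  # the predicted Category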

A multilabel problem

Now let’s try this on the planet dataset, which is a little bit different in that each image can have multiple tags (and not just one label).

    planet = untar_data(URLs.PLANET_TINY)
    planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)

Here each image is labelled in a file named ‘labels.csv’. We have to add ‘train’ as a prefix to the filenames and ‘.jpg’ as a suffix, and the labels are separated by spaces.

    data = (ImageList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
            .split_by_rand_pct()
            .label_from_df(label_delim=' ')
            .transform(planet_tfms, size=128)
            .databunch()
            .normalize(imagenet_stats))

And we can have a look at our data with data.show_batch.

    data.show_batch(rows=2, figsize=(9,7))

Look at data - Figure 5

Then we can create a Learner object pretty easily and train it for a little bit.

    learn = cnn_learner(data, models.resnet18)
    learn.fit_one_cycle(5,1e-2)
    learn.save('mini_train')

epoch  train_loss  valid_loss  time
0      1.024820    1.014537    00:01
1      0.948518    1.114616    00:01
2      0.887977    1.109744    00:01
3      0.839809    0.983482    00:01
4      0.794769    0.869911    00:01

And to see actual predictions, we just have to run learn.show_results().

    learn.show_results(rows=3, figsize=(12,15))

Look at data - Figure 6
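Note that we created this Learner without any metric. Plain accuracy doesn’t apply to multi-label targets, so if you want one, a thresholded accuracy such as fastai’s accuracy_thresh is a reasonable choice. A hedged sketch (the 0.2 threshold is just an example):

    # accuracy_thresh compares the sigmoid output for each tag to a threshold
    acc_02 = partial(accuracy_thresh, thresh=0.2)
    learn = cnn_learner(data, models.resnet18, metrics=acc_02)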

A regression example

For the next example, we are going to use the BIWI head pose dataset. Given pictures of people, we have to find the center of their face. For the fastai docs, we have built a small subsample of the dataset (200 images) and prepared a dictionary mapping each filename to its face center.

    biwi = untar_data(URLs.BIWI_SAMPLE)
    fn2ctr = pickle.load(open(biwi/'centers.pkl', 'rb'))
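If you’re curious what that dictionary contains, you can peek at one entry (an illustrative check, not in the original notebook):

    # one (filename -> center) pair; the value is what label_from_func returns below
    fname, ctr = next(iter(fn2ctr.items()))
    fname, ctr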

To grab our data, we use this dictionary to label our items. We also use the PointsItemList class to have the targets be of type ImagePoints (which will make sure the data augmentation is properly applied to them). When calling transform we make sure to set tfm_y=True.

    data = (PointsItemList.from_folder(biwi)
            .split_by_rand_pct(seed=42)
            .label_from_func(lambda o:fn2ctr[o.name])
            .transform(get_transforms(), tfm_y=True, size=(120,160))
            .databunch()
            .normalize(imagenet_stats))

Then we can have a first look at our data with data.show_batch().

    data.show_batch(rows=3, figsize=(9,6))

Look at data - Figure 7

We train our model for a little bit before using learn.show_results().

    learn = cnn_learner(data, models.resnet18, lin_ftrs=[100], ps=0.05)
    learn.fit_one_cycle(5, 5e-2)
    learn.save('mini_train')

epoch  train_loss  valid_loss  time
0      2.439939    161.106430  00:01
1      4.106437    64.897110   00:01
2      3.450002    10.024299   00:01
3      2.684047    20.760201   00:01
4      2.189258    11.589193   00:01

    learn.show_results(rows=3)

Look at data - Figure 8

A segmentation example

Now we are going to look at the camvid dataset (at least a small sample of it), where we have to predict the class of each pixel in an image. Each image in the ‘images’ subfolder has an equivalent in ‘labels’ that is its segmentation mask.

    camvid = untar_data(URLs.CAMVID_TINY)
    path_lbl = camvid/'labels'
    path_img = camvid/'images'

We read the classes from ‘codes.txt’ and define a function that maps each image filename to its corresponding mask filename.

    codes = np.loadtxt(camvid/'codes.txt', dtype=str)
    # the mask for images/foo.png lives at labels/foo_P.png
    get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
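To sanity-check that mapping before building the DataBunch, you can open one image together with its mask; a minimal sketch using fastai’s open_image and open_mask helpers:

    img_f = get_image_files(path_img)[0]  # any image from the 'images' folder
    img = open_image(img_f)
    mask = open_mask(get_y_fn(img_f))     # the corresponding segmentation mask
    img.show(y=mask, figsize=(6,6))       # display the image with the mask overlaid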

The data block API allows us to quickly get everything into a DataBunch, and then we can have a look with show_batch.

    data = (SegmentationItemList.from_folder(path_img)
            .split_by_rand_pct()
            .label_from_func(get_y_fn, classes=codes)
            .transform(get_transforms(), tfm_y=True, size=128)
            .databunch(bs=16, path=camvid)
            .normalize(imagenet_stats))

    data.show_batch(rows=2, figsize=(7,5))

Look at data - Figure 9

Then we train a Unet for a few epochs.

Warning: This training is fairly unstable; you should use more epochs and the full dataset to get better results.

    learn = unet_learner(data, models.resnet18)
    learn.fit_one_cycle(3,1e-2)
    learn.save('mini_train')

epoch  train_loss  valid_loss  time
0      17.764713   3.646572    00:04
1      9.990233    2.052990    00:01
2      6.974250    1.872651    00:01

    learn.show_results()

Look at data - Figure 10

Text

The next application is text, so let’s start by importing everything we’ll need.

    from fastai.text import *

Language modelling

First we’ll fine-tune a pretrained language model on our subset of imdb.

    imdb = untar_data(URLs.IMDB_SAMPLE)

    data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
               .split_by_rand_pct()
               .label_for_lm()
               .databunch())
    data_lm.save()

data_lm.show_batch() will work here as well. For a language model, it shows us the beginning of each sequence of text along the batch dimension (the target being to guess the next word).

    data_lm.show_batch()

idx | text
0 | ! ! ! xxmaj finally this was directed by the guy who did xxmaj big xxmaj xxunk ? xxmaj must be a xxunk of xxmaj jonestown - hollywood style . xxmaj xxunk ! xxbos xxmaj every once in a long while a movie will come along that will be so awful that i feel compelled to warn people . xxmaj if i labor all my days and i can save
1 | a grand voyage for the audience as well as the two principals . xxmaj the imagery throughout is impressive , especially the final scenes in xxmaj xxunk . xxmaj it xxunk for me once again how much different the world can be , but also at the same time , how similar . xxmaj the same was true for the father and son in this film . n n
2 | xxunk between the xxunk — resulting in a xxup we vs. xxup they mentality . xxmaj later , an explosion causes a huge xxunk in the xxmaj french and the xxmaj xxunk refuse to sit back and do nothing . xxmaj xxunk their own lives , they prove that there is true xxunk between miners and men in general . n n xxmaj the film is a strong criticism
3 | put the camera man on roller xxunk and pushed him along . xxmaj the story ( if it can be called that ) is so full of holes it ‘s almost funny , xxmaj it never really explains why the hell he survived in the first place , or needs human flesh in order to survive . xxmaj the script is poorly written and the dialogue xxunk on just plane
4 | them and insults them because they play woods and blah blah blah xxmaj the phantom helps these xxunk kids out and trains them and all this crap , he gets them to play airball and basically xxunk all the xxunk including the “ xxunk “ . n n xxmaj so what exactly is wrong with the movie ? xxmaj well the budget is a huge thing , a paintball

Now let’s define a language model learner.

    learn = language_model_learner(data_lm, AWD_LSTM)
    learn.fit_one_cycle(2, 1e-2)
    learn.save('mini_train_lm')
    learn.save_encoder('mini_train_encoder')

epoch  train_loss  valid_loss  accuracy  time
0      4.353577    3.759297    0.292604  00:04
1      4.070464    3.740016    0.294851  00:04

Then we can have a look at the results. It shows a certain number of words (20 by default), then the next 20 target words and the ones that were predicted.

    learn.show_results()

text | target | pred
xxbos xxmaj this is one of those movies that ‘s difficult to review without giving away the plot . xxmaj | xxunk to say there are weird things and unexpected twists going on , beyond the initial xxunk “ xxmaj tom | it is the that ‘s a things about n’t xxunk and on , but the xxunk xxunk of . xxunk
we are going to green light ! ! “ xxmaj and whoever that person is , should have his or | her head examined for actual brain xxunk . xxmaj because whoever is responsible for actually xxunk out money to have | her xxunk xxunk . the xxunk xxunk . xxmaj the of ‘s xxunk for the xxunk the the , make
a society which is supposedly gone and yet somehow is still with us . xxbos xxmaj for those who like | their murder xxunk busy , this is definitely the one to see , as it is xxunk full of interesting | this xxunk , , , xxmaj is a a best that watch the and xxmaj is a . of xxunk
awhile but not all of them get the treatment they deserve . xxmaj the nice supporting cast includes xxmaj xxunk | xxmaj xxunk , at his best in a xxunk comic performance as a xxunk xxunk , xxmaj xxunk xxmaj xxunk | xxmaj xxunk , xxmaj least best , the xxunk , book , xxmaj xxunk , , and xxunk xxmaj xxunk
n n xxmaj as such , when i first heard about the xxunk of a prequel series some months | got a sick feeling in my xxunk . i was afraid that the formula that made xxmaj xxunk so successful | was a lot xxunk of the xxunk . xxmaj was not of i movie was i me xxunk xxmaj xxunk

Classification

Now let’s see a classification example. We have to use the same vocabulary as for the language model if we want to be able to use the encoder we saved.

    data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text', vocab=data_lm.vocab)
                 .split_from_df(col='is_valid')
                 .label_from_df(cols='label')
                 .databunch(bs=42))

Here show_batch shows the beginning of each review with its target.

    data_clas.show_batch()

text | target
xxbos xxmaj xxunk xxmaj victor xxmaj xxunk : a xxmaj review n n xxmaj you know , xxmaj xxunk xxmaj victor xxmaj xxunk is like sticking your hands into a big , xxunk xxunk of xxunk . xxmaj it ‘s warm and xxunk , but you ‘re not sure if it feels right . xxmaj try as i might , no matter how warm and xxunk xxmaj xxunk xxmaj | negative
xxbos xxup the xxup shop xxup around xxup the xxup xxunk is one of the xxunk and most feel - good romantic comedies ever made . xxmaj there ‘s just no getting around that , and it ‘s hard to actually put one ‘s feeling for this film into words . xxmaj it ‘s not one of those films that tries too hard , nor does it come up with | positive
xxbos xxmaj now that xxmaj che(2008 ) has finished its relatively short xxmaj australian cinema run ( extremely limited xxunk screen in xxmaj xxunk , after xxunk ) , i can xxunk join both xxunk of “ xxmaj at xxmaj the xxmaj movies “ in taking xxmaj steven xxmaj soderbergh to task . n n xxmaj it ‘s usually satisfying to watch a film director change his style / | negative
xxbos xxmaj this film sat on my xxmaj xxunk for weeks before i watched it . i xxunk a self - indulgent xxunk flick about relationships gone bad . i was wrong ; this was an xxunk xxunk into the xxunk - up xxunk of xxmaj new xxmaj xxunk . n n xxmaj the format is the same as xxmaj max xxmaj xxunk ‘ “ xxmaj la xxmaj xxunk | positive
xxbos xxmaj many xxunk that this is n’t just a classic due to the fact that it ‘s the first xxup 3d game , or even the first xxunk - up . xxmaj it ‘s also one of the first xxunk games , one of the xxunk definitely the first ) truly claustrophobic games , and just a pretty well - xxunk gaming experience in general . xxmaj with graphics | positive

And we can train a classifier that uses our previous encoder.

    learn = text_classifier_learner(data_clas, AWD_LSTM)
    learn.load_encoder('mini_train_encoder')
    learn.fit_one_cycle(2, slice(1e-3,1e-2))
    learn.save('mini_train_clas')

epoch  train_loss  valid_loss  accuracy  time
0      0.673165    0.649376    0.670000  00:04
1      0.622713    0.607453    0.700000  00:04

    learn.show_results()
text | target | prediction
xxbos n n i ‘m sure things did n’t exactly go the same way in the real life of xxmaj homer xxmaj hickam as they did in the film adaptation of his book , xxmaj rocket xxmaj boys , but the movie “ xxmaj october xxmaj sky “ ( an xxunk of the book ‘s title ) is good enough to stand alone . i have not read xxmaj | positive | positive
xxbos xxmaj to review this movie , i without any doubt would have to quote that memorable scene in xxmaj tarantino ‘s “ xxmaj pulp xxmaj fiction “ ( xxunk ) when xxmaj jules and xxmaj vincent are talking about xxmaj mia xxmaj wallace and what she does for a living . xxmaj jules tells xxmaj vincent that the “ xxmaj only thing she did worthwhile was pilot “ . | negative | positive
xxbos xxmaj how viewers react to this new “ adaption “ of xxmaj shirley xxmaj jackson ‘s book , which was xxunk as xxup not being a remake of the original 1963 movie ( true enough ) , will be based , i suspect , on the following : those who were big fans of either the book or original movie are not going to think much of this one | negative | negative
xxbos xxmaj the trouble with the book , “ xxmaj memoirs of a xxmaj geisha “ is that it had xxmaj japanese xxunk but underneath the xxunk it was all an xxmaj american man ‘s way of thinking . xxmaj reading the book is like watching a magnificent ballet with great music , sets , and costumes yet performed by xxunk animals dressed in those xxunk far from xxmaj japanese | negative | negative
xxbos xxmaj bonanza had a great cast of wonderful actors . xxmaj xxunk xxmaj xxunk , xxmaj pernell xxmaj whitaker , xxmaj michael xxmaj xxunk , xxmaj dan xxmaj blocker , and even xxmaj guy xxmaj williams ( as the cousin who was brought in for several episodes during 1964 to replace xxmaj adam when he was leaving the series ) . xxmaj the cast had chemistry , and they | positive | positive
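As in the other applications, learn.predict gives the prediction for a single raw review; a short sketch with a made-up example sentence:

    # returns the predicted category, its index and the class probabilities
    learn.predict('I really loved that movie, it was awesome!')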

Tabular

The last application brings us to tabular data. First let’s import everything we’ll need.

    from fastai.tabular import *

We’ll use a sample of the adult dataset here. Once we read the csv file, we’ll need to specify the dependent variable, the categorical variables, the continuous variables and the processors we want to use.

    adult = untar_data(URLs.ADULT_SAMPLE)
    df = pd.read_csv(adult/'adult.csv')
    dep_var = 'salary'
    cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
    cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
    procs = [FillMissing, Categorify, Normalize]

Then we can use the data block API to grab everything together before using data.show_batch().

    data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
            .split_by_idx(valid_idx=range(800,1000))
            .label_from_df(cols=dep_var)
            .databunch())

    data.show_batch()

workclass | education | marital-status | occupation | relationship | race | sex | native-country | education-num_na | education-num | hours-per-week | age | capital-loss | fnlwgt | capital-gain | target
Private | Some-college | Married-spouse-absent | Adm-clerical | Not-in-family | White | Female | United-States | False | -0.0312 | -1.9796 | 0.9831 | -0.2164 | 0.2733 | 1.7560 | >=50k
Private | 11th | Never-married | Handlers-cleaners | Own-child | White | Male | United-States | False | -1.2046 | -0.8456 | -1.5823 | -0.2164 | 0.3955 | -0.1459 | <50k
Private | HS-grad | Never-married | Other-service | Not-in-family | Black | Male | ? | False | -0.4224 | -0.0356 | -0.6294 | -0.2164 | -0.4278 | -0.1459 | <50k
Local-gov | HS-grad | Married-civ-spouse | Craft-repair | Husband | White | Male | United-States | False | -0.4224 | -0.4406 | 1.0564 | -0.2164 | -0.6222 | 0.5311 | <50k
Private | Bachelors | Separated | Exec-managerial | Not-in-family | Black | Female | United-States | False | 1.1422 | 0.3694 | -0.4095 | -0.2164 | 1.4279 | -0.1459 | <50k

Here we grab a tabular_learner that we train for a little bit.

    learn = tabular_learner(data, layers=[200,100], metrics=accuracy)
    learn.fit(5, 1e-2)
    learn.save('mini_train')

epoch  train_loss  valid_loss  accuracy  time
0      0.321381    0.343558    0.845000  00:06
1      0.339366    0.338962    0.845000  00:05
2      0.331168    0.342357    0.840000  00:05
3      0.323553    0.343808    0.850000  00:05
4      0.327218    0.351525    0.835000  00:05

And we can use learn.show_results().

    learn.show_results()

workclass | education | marital-status | occupation | relationship | race | sex | native-country | education-num_na | education-num | hours-per-week | age | capital-loss | fnlwgt | capital-gain | target | prediction
Private | Some-college | Divorced | Handlers-cleaners | Unmarried | White | Female | United-States | True | -0.0312 | -0.0356 | 0.4701 | -0.2164 | -0.8793 | -0.1459 | <50k | <50k
Self-emp-inc | Prof-school | Married-civ-spouse | Prof-specialty | Husband | White | Male | United-States | True | -0.0312 | 1.5843 | 0.5434 | -0.2164 | 0.0290 | 1.8829 | >=50k | >=50k
Private | Assoc-voc | Divorced | #na# | Not-in-family | White | Male | United-States | True | -0.0312 | -0.1976 | -0.1896 | -0.2164 | 1.7704 | -0.1459 | <50k | <50k
Federal-gov | Bachelors | Never-married | Tech-support | Not-in-family | White | Male | United-States | True | -0.0312 | 0.3694 | -0.9959 | -0.2164 | -1.3242 | -0.1459 | <50k | <50k
Private | Bachelors | Married-civ-spouse | #na# | Husband | White | Male | United-States | True | -0.0312 | -0.0356 | -0.1163 | -0.2164 | -0.2389 | -0.1459 | <50k | <50k
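Here too, learn.predict works on a single raw row of the original dataframe, applying the same processors before inference; a minimal sketch (row 0 is arbitrary):

    # the row goes through FillMissing, Categorify and Normalize before prediction
    learn.predict(df.iloc[0])

And that’s the same show_batch / show_results workflow, from vision to text to tabular data.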
