widgets.image_cleaner

Image Cleaner Widget

fastai offers several widgets to support the workflow of a deep learning practitioner. The purpose of these widgets is to help you organize, clean, and prepare your data for your model. Widgets are separated by data type.

```python
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18, metrics=error_rate)
learn.fit_one_cycle(2)
```

| epoch | train_loss | valid_loss | error_rate | time |
|-------|------------|------------|------------|------|
| 0     | 0.233059   | 0.115309   | 0.033857   | 00:10 |
| 1     | 0.110543   | 0.085249   | 0.028459   | 00:10 |

```python
learn.save('stage-1')
```

We create a databunch with all the data in the training set and no validation set, since DatasetFormatter uses only the training set.

```python
db = (ImageList.from_folder(path)
               .split_none()
               .label_from_folder()
               .databunch())
learn = cnn_learner(db, models.resnet18, metrics=[accuracy])
learn.load('stage-1');
```

class DatasetFormatter[source][test]

DatasetFormatter()

Tests found for DatasetFormatter:

Some other tests where DatasetFormatter is used:

  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_with_data_from_csv [source]

To run tests please refer to this guide.

Returns a dataset with the appropriate format and file indices to be displayed.

The DatasetFormatter class prepares your image dataset for widgets by returning a formatted DatasetTfm based on the DatasetType specified. Use from_toplosses to grab the most problematic images directly from your learner. Optionally, you can restrict the formatted dataset returned to n_imgs.
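Conceptually, from_toplosses just ranks the training examples by their loss and hands back the worst offenders. A minimal, fastai-free sketch of that ranking — the per-image losses below are invented, and `top_loss_indices` is a hypothetical stand-in for what the method computes from your learner:

```python
def top_loss_indices(losses, n_imgs=None):
    """Return dataset indices sorted by loss, highest first,
    optionally truncated to the n_imgs worst examples."""
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return order if n_imgs is None else order[:n_imgs]

# Hypothetical per-image losses for a 5-image dataset:
losses = [0.02, 1.35, 0.40, 0.07, 2.10]
print(top_loss_indices(losses, n_imgs=3))  # → [4, 1, 2]
```

The indices returned this way are exactly what ImageCleaner consumes to decide which images to show first.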

from_similars[source][test]

from_similars(learn, layer_ls:list=[0, 7, 2], **kwargs)

Tests found for from_similars:

Some other tests where from_similars is used:

  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_with_data_from_csv [source]

To run tests please refer to this guide.

Gets the indices for the most similar images.
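Similarity between two images can be measured as the cosine similarity of their activation vectors from the chosen layer. A minimal sketch with plain Python lists — the feature vectors are made up, whereas fastai extracts them from the model:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar_pair(feats):
    """Return the pair of indices whose feature vectors are most similar."""
    pairs = [(i, j) for i in range(len(feats)) for j in range(i + 1, len(feats))]
    return max(pairs, key=lambda p: cosine_sim(feats[p[0]], feats[p[1]]))

# Three hypothetical activation vectors; the first two are near-duplicates.
feats = [[1.0, 0.0, 0.5], [0.9, 0.1, 0.5], [0.0, 1.0, 0.0]]
print(most_similar_pair(feats))  # → (0, 1)
```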

from_toplosses[source][test]

from_toplosses(learn, n_imgs=None, **kwargs)

No tests found for from_toplosses. To contribute a test please refer to this guide and this discussion.

Gets indices with top losses.

from_most_unsure[source][test]

from_most_unsure(learn:Learner, num=50) → Tuple[DataLoader, List[int], Sequence[str], List[str]]

No tests found for from_most_unsure. To contribute a test please refer to this guide and this discussion.

Gets num items from the test set, for which the difference in probabilities between the most probable and second most probable classes is minimal.
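The "most unsure" criterion is just the margin between the top two class probabilities. A fastai-free sketch of selecting the num smallest-margin predictions — the probability rows are invented for illustration:

```python
def most_unsure(probs, num=2):
    """Return row indices sorted by the margin between the two highest
    class probabilities, smallest margin (most unsure) first."""
    def margin(row):
        top2 = sorted(row, reverse=True)[:2]
        return top2[0] - top2[1]
    order = sorted(range(len(probs)), key=lambda i: margin(probs[i]))
    return order[:num]

# Hypothetical softmax outputs for four test images over three classes:
probs = [[0.98, 0.01, 0.01],
         [0.40, 0.35, 0.25],
         [0.51, 0.48, 0.01],
         [0.70, 0.20, 0.10]]
print(most_unsure(probs, num=2))  # → [2, 1]
```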

class ImageCleaner[source][test]

ImageCleaner(dataset:LabelLists, fns_idxs:Collection[int], path:PathOrStr, batch_size=5, duplicates=False) :: BasicImageWidget

Tests found for ImageCleaner:

  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_index_length_mismatch [source]
  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_length_correct [source]
  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_with_data_from_csv [source]
  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_cleaner_wrong_input_type [source]

To run tests please refer to this guide.

Displays images for relabeling or deletion and saves changes in path as ‘cleaned.csv’.

ImageCleaner is for cleaning up images that don’t belong in your dataset. It renders images in a row and gives you the opportunity to delete the file from your file system. To use ImageCleaner we must first use DatasetFormatter().from_toplosses to get the suggested indices for misclassified images.

```python
ds, idxs = DatasetFormatter().from_toplosses(learn)
ImageCleaner(ds, idxs, path)
```

```
<fastai.widgets.image_cleaner.ImageCleaner at 0x7f45bd324240>
```

ImageCleaner does not change anything on disk (neither labels nor the existence of images). Instead, it creates a ‘cleaned.csv’ file in your data path, from which you need to create a new databunch for the changes to be applied.
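The ‘cleaned.csv’ the widget writes is a plain two-column CSV of file name and label (the exact column names here are an assumption, chosen to match what ImageList.from_df expects): deleted images are simply absent from it, and relabelled images carry their new label. A small stdlib-only sketch of reading such a file, with invented rows:

```python
import csv, io

# A hypothetical cleaned.csv: surviving rows only, with corrected labels.
cleaned_csv = """name,label
train/3/7463.png,3
train/7/9829.png,3
"""

with io.StringIO(cleaned_csv) as f:
    rows = list(csv.DictReader(f))

# Only the rows kept by the widget survive; labels reflect any corrections.
kept = {r['name']: r['label'] for r in rows}
print(kept)
```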

```python
df = pd.read_csv(path/'cleaned.csv', header='infer')
# We create a databunch from our csv. We include the data in the training set
# and don't use a validation set (DatasetFormatter uses only the training set)
np.random.seed(42)
db = (ImageList.from_df(df, path)
               .split_none()
               .label_from_df()
               .databunch(bs=64))
learn = cnn_learner(db, models.resnet18, metrics=error_rate)
learn = learn.load('stage-1')
```

You can then use ImageCleaner again to find duplicates in the dataset. To do this, specify duplicates=True when calling ImageCleaner after getting the indices and dataset from .from_similars. Note that if you are using a layer’s output which has dimensions (n_batches, n_features, 1, 1), then you don’t need any pooling (this is the case with the last layer). The suggested use of .from_similars() with resnets is the last layer and no pooling, as in the following cell.

```python
ds, idxs = DatasetFormatter().from_similars(learn, layer_ls=[0,7,1], pool=None)
```

```
Getting activations...
Computing similarities...
```

```python
ImageCleaner(ds, idxs, path, duplicates=True)
```

```
<fastai.widgets.image_cleaner.ImageCleaner at 0x7f236f7214a8>
```

class PredictionsCorrector[source][test]

PredictionsCorrector(dataset:LabelLists, fns_idxs:Collection[int], classes:Sequence[str], labels:Sequence[str], batch_size:int=5) :: BasicImageWidget

No tests found for PredictionsCorrector. To contribute a test please refer to this guide and this discussion.

Displays images for manual inspection and relabelling.

In competitions, you need to provide predictions for the test set, for which the true labels are unknown. You can slightly improve your score by manually correcting the most egregious misclassifications. The test set in a competition is usually large and cannot be inspected manually in its entirety. A good subset to look at is the images the model is least sure about, i.e. those for which the difference in probabilities between the most probable and the second most probable classes is minimal.

Let’s start by creating an artificial test set and training a model:

```python
db_test = ImageDataBunch.from_folder(path/'train', valid_pct=0.2, test='../valid')
learn = cnn_learner(db_test, models.resnet18, metrics=[accuracy])
learn.fit_one_cycle(2)
```

| epoch | train_loss | valid_loss | accuracy | time |
|-------|------------|------------|----------|------|
| 0     | 0.245369   | 0.138495   | 0.953610 | 02:14 |
| 1     | 0.133183   | 0.103801   | 0.963695 | 02:16 |

Now we can take a look at the most unsure images:

```python
most_unsure = DatasetFormatter.from_most_unsure(learn)
wgt = PredictionsCorrector(*most_unsure)
```

```
'No images to show :)'
```


show_corrections[source][test]

show_corrections(ncols:int, **fig_kw)

No tests found for show_corrections. To contribute a test please refer to this guide and this discussion.

Shows a grid of images whose predictions have been corrected.

When you have finished interacting with the widget, you can take a look at the corrections that were made. show_corrections displays each image’s index in the test set, its previous label, and its new label.

```python
wgt.show_corrections(ncols=6, figsize=(9, 7))
```

corrected_labels[source][test]

corrected_labels() → List[str]

No tests found for corrected_labels. To contribute a test please refer to this guide and this discussion.

Returns labels for the entire test set with corrections applied.


class ImageDownloader[source][test]

ImageDownloader(path:PathOrStr='data')

Tests found for ImageDownloader:

  • pytest -sv tests/test_widgets_image_cleaner.py::test_image_downloader_with_path [source]

To run tests please refer to this guide.

Displays a widget that allows searching and downloading images from Google Images in a Jupyter Notebook or Lab.

The ImageDownloader widget gives you a way to quickly bootstrap your image dataset without leaving the notebook. It searches for and downloads images that match the search criteria and resolution/quality requirements, and stores them on your filesystem within the provided path.

Images for each search query (or label) are stored in a separate folder within path. For example, if you search for tiger with path set to ./data, you’ll get a folder ./data/tiger/ with the tiger images in it.
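That folder-per-label layout is what ImageDataBunch.from_folder and label_from_folder rely on: each image's label is the name of its parent folder. A stdlib sketch of deriving labels from such a tree, built in a temporary directory so it is self-contained:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp)
    # Mimic what ImageDownloader produces: one folder per search term.
    for label, fname in [('tiger', '000001.jpg'), ('tiger', '000002.jpg'),
                         ('lion', '000001.jpg')]:
        (data / label).mkdir(exist_ok=True)
        (data / label / fname).touch()

    # The label of each image is the name of its parent folder.
    labels = sorted({p.parent.name for p in data.glob('*/*.jpg')})
    print(labels)  # → ['lion', 'tiger']
```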

ImageDownloader will automatically clean up and verify the downloaded images with verify_images() after downloading them.

```python
path = Config.data_path()/'image_downloader'
os.makedirs(path, exist_ok=True)
ImageDownloader(path)
```

```
<fastai.widgets.image_downloader.ImageDownloader at 0x7f236c056358>
```

Downloading images in Python scripts outside Jupyter notebooks

```python
path = Config.data_path()/'image_downloader'
files = download_google_images(path, 'aussie shepherd', size='>1024*768', n_images=30)
len(files)
```

```
30
```

download_google_images[source][test]

download_google_images(path:PathOrStr, search_term:str, size:str='>400*300', n_images:int=10, format:str='jpg', max_workers:int=8, timeout:int=4) → FilePathList

No tests found for download_google_images. To contribute a test please refer to this guide and this discussion.

Search for n_images images on Google, matching search_term and size requirements, download them into path/search_term and verify them, using max_workers threads.
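The max_workers parallelism is an ordinary thread-pool fan-out over the candidate image URLs. A sketch of that pattern with concurrent.futures and a stubbed-out fetch — no real network is used, and `fetch_image` is a hypothetical stand-in for the widget's per-image download:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_image(url, timeout=4):
    """Stub: a real implementation would download `url`; failures return None."""
    if url.endswith('.jpg'):
        return f'downloaded:{url}'
    return None  # simulate an unusable or failed download

def download_all(urls, max_workers=8):
    """Fetch all URLs in parallel, keeping only successful downloads."""
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        results = list(ex.map(fetch_image, urls))
    return [r for r in results if r is not None]

urls = ['http://example.com/a.jpg', 'http://example.com/b.png',
        'http://example.com/c.jpg']
print(download_all(urls, max_workers=2))
```

Failed downloads are simply dropped from the result list, which mirrors why fewer files than n_images can end up on disk.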

After populating images with ImageDownloader, you can get an ImageDataBunch by calling ImageDataBunch.from_folder(path, size=size), or using the data block API.

```python
# Setup path and labels to search for
path = Config.data_path()/'image_downloader'
labels = ['boston terrier', 'french bulldog']

# Download images
for label in labels:
    download_google_images(path, label, size='>400*300', n_images=50)

# Build a databunch and train!
src = (ImageList.from_folder(path)
                .split_by_rand_pct()
                .label_from_folder()
                .transform(get_transforms(), size=224))
db = src.databunch(bs=16, num_workers=0)
learn = cnn_learner(db, models.resnet34, metrics=[accuracy])
learn.fit_one_cycle(3)
```

```
cannot identify image file <_io.BufferedReader name='/home/ubuntu/.fastai/data/image_downloader/boston terrier/00000044.jpg'>
cannot identify image file <_io.BufferedReader name='/home/ubuntu/.fastai/data/image_downloader/boston terrier/00000014.jpg'>
```
(The run above was interrupted by a transient network error — requests.exceptions.ConnectionError: Max retries exceeded, caused by a temporary failure in name resolution — so no training output is shown.)

Downloading more than a hundred images

To fetch more than a hundred images, ImageDownloader uses selenium and chromedriver to scroll through the Google Images search results page and scrape image URLs. They’re not required as dependencies by default. If you don’t have them installed on your system, the widget will show you an error message.

To install selenium, just pip install selenium in your fastai environment.

On a Mac, you can install chromedriver with brew cask install chromedriver.

On Ubuntu, take a look at the latest chromedriver version available, then run something like:

```shell
wget https://chromedriver.storage.googleapis.com/2.45/chromedriver_linux64.zip
unzip chromedriver_linux64.zip
```

Note that downloading under 100 images doesn’t require any dependencies other than fastai itself; downloading more than a hundred images, however, uses selenium and chromedriver.

size can be one of:

```
'>400*300'
'>640*480'
'>800*600'
'>1024*768'
'>2MP'
'>4MP'
'>6MP'
'>8MP'
'>10MP'
'>12MP'
'>15MP'
'>20MP'
'>40MP'
'>70MP'
```
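The two forms above ('>W*H' for minimum dimensions, '>NMP' for minimum megapixels) can be parsed mechanically; here is a sketch of a validator for them (this helper is illustrative only, not part of fastai):

```python
import re

def parse_size(size):
    """Parse a size filter like '>400*300' or '>2MP' into a dict."""
    m = re.fullmatch(r'>(\d+)\*(\d+)', size)
    if m:
        return {'min_width': int(m.group(1)), 'min_height': int(m.group(2))}
    m = re.fullmatch(r'>(\d+)MP', size)
    if m:
        return {'min_megapixels': int(m.group(1))}
    raise ValueError(f'unsupported size filter: {size}')

print(parse_size('>400*300'))  # → {'min_width': 400, 'min_height': 300}
print(parse_size('>2MP'))      # → {'min_megapixels': 2}
```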




©2021 fast.ai. All rights reserved.
Site last generated: Jan 5, 2021