Text core
Basic functions to preprocess text before assembling it in a DataLoaders.
Preprocessing rules
The following are the rules applied to texts before or after they are tokenized.
spec_add_spaces
[source]
spec_add_spaces(t)
Add spaces around / and #
test_eq(spec_add_spaces('#fastai'), ' # fastai')
test_eq(spec_add_spaces('/fastai'), ' / fastai')
test_eq(spec_add_spaces('fastai'), 'fastai')
rm_useless_spaces
[source]
rm_useless_spaces(t)
Remove multiple spaces
test_eq(rm_useless_spaces('a  b   c'), 'a b c')
replace_rep
[source]
replace_rep(t)
Replace repetitions at the character level: cccc -- TK_REP 4 c
It starts replacing at 3 repetitions of the same character or more.
test_eq(replace_rep('aa'), 'aa')
test_eq(replace_rep('aaaa'), f' {TK_REP} 4 a ')
replace_wrep
[source]
replace_wrep(t)
Replace word repetitions: word word word word -- TK_WREP 4 word
It starts replacing at 3 repetitions of the same word or more.
test_eq(replace_wrep('ah ah'), 'ah ah')
test_eq(replace_wrep('ah ah ah'), f' {TK_WREP} 3 ah ')
test_eq(replace_wrep('ah ah ah ah'), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah '), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah.'), f' {TK_WREP} 4 ah .')
test_eq(replace_wrep('ah ah ahi'), f'ah ah ahi')
fix_html
[source]
fix_html(x)
Various messy things we've seen in documents
test_eq(fix_html('#39;bli#146;'), "'bli'")
test_eq(fix_html('Sarah amp; Duck...'), 'Sarah & Duck …')
test_eq(fix_html('a nbsp; #36;'), 'a   $')
test_eq(fix_html('" <unk>'), f'" {UNK}')
test_eq(fix_html('quot;  @.@  @-@ '), "' .-")
test_eq(fix_html('<br />text\\n'), '\ntext\n')
replace_all_caps
[source]
replace_all_caps(t)
Replace tokens in ALL CAPS by their lower version and add TK_UP before.
test_eq(replace_all_caps("I'M SHOUTING"), f"{TK_UP} i'm {TK_UP} shouting")
test_eq(replace_all_caps("I'm speaking normally"), "I'm speaking normally")
test_eq(replace_all_caps("I am speaking normally"), "i am speaking normally")
replace_maj
[source]
replace_maj(t)
Replace tokens in Sentence Case by their lower version and add TK_MAJ before.
test_eq(replace_maj("Jeremy Howard"), f'{TK_MAJ} jeremy {TK_MAJ} howard')
test_eq(replace_maj("I don't think there is any maj here"), ("i don't think there is any maj here"),)
lowercase
[source]
lowercase(t, add_bos=True, add_eos=False)
Converts t to lowercase
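A quick sanity check of the defaults (a sketch, assuming the standard xxbos/xxeos special tokens are inserted with a single separating space):
test_eq(lowercase('Hello', add_bos=False), 'hello')                       # plain lowercasing
test_eq(lowercase('Hello'), f'{BOS} hello')                               # BOS prepended by default
test_eq(lowercase('Hello', add_bos=False, add_eos=True), f'hello {EOS}')  # EOS appended on request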
replace_space
[source]
replace_space(t)
Replace embedded spaces in a token with unicode line char to allow for split/join
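For instance (a sketch; '▁' is the unicode character used by the default post-processing rules, as seen in the TokenizeWithRules example further down):
test_eq(replace_space('This isn'), 'This▁isn')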
Tokenizing
A tokenizer is a class that must implement __call__
. This method receives an iterator of texts and must return a generator with their tokenized versions. Here is the most basic example:
class BaseTokenizer
[source]
BaseTokenizer(split_char=' ', **kwargs)
Basic tokenizer that just splits on spaces
tok = BaseTokenizer()
test_eq(tok(["This is a text"]), [["This", "is", "a", "text"]])
tok = BaseTokenizer('x')
test_eq(tok(["This is a text"]), [["This is a te", "t"]])
class SpacyTokenizer
[source]
SpacyTokenizer(lang='en', special_toks=None, buf_sz=5000)
Spacy tokenizer for lang
tok = SpacyTokenizer()
inp,exp = "This isn't the easiest text.",["This", "is", "n't", "the", "easiest", "text", "."]
test_eq(L(tok([inp,inp])), [exp,exp])
class TokenizeWithRules
[source]
TokenizeWithRules(tok, rules=None, post_rules=None)
A wrapper around tok which applies rules, then tokenizes, then applies post_rules
f = TokenizeWithRules(BaseTokenizer(),rules=[replace_all_caps])
test_eq(f(["THIS isn't a problem"]), [[TK_UP, 'this', "isn't", 'a', 'problem']])
f = TokenizeWithRules(SpacyTokenizer())
test_eq(f(["This isn't a problem"]), [[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem']])
f = TokenizeWithRules(BaseTokenizer(split_char="'"), rules=[])
test_eq(f(["This isn't a problem"]), [['This▁isn', 't▁a▁problem']])
The main function that will be called during one of the processes handling tokenization. It will iterate through the batch of texts, apply rules to them and tokenize them.
texts = ["this is a text", "this is another text"]
tok = TokenizeWithRules(BaseTokenizer(), texts.__getitem__)
test_eq(tok([0,1]), [['this', 'is', 'a', 'text'],['this', 'is', 'another', 'text']])
tokenize1
[source]
tokenize1(text, tok, rules=None, post_rules=None)
Call TokenizeWithRules with a single text
test_eq(tokenize1("This isn't a problem", SpacyTokenizer()),
[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem'])
test_eq(tokenize1("This isn't a problem", tok=BaseTokenizer(), rules=[]),
['This',"isn't",'a','problem'])
parallel_tokenize
[source]
parallel_tokenize(items, tok=None, rules=None, n_workers=2, **kwargs)
Calls optional setup on tok before launching TokenizeWithRules using parallel_gen
Note that since this uses parallel_gen
behind the scenes, the generator returned contains tuples of indices and results. There is no guarantee that the results are returned in order, so you should sort by the first item of the tuples (the indices) if you need them ordered.
res = parallel_tokenize(['0 1', '1 2'], rules=[], n_workers=2)
idxs,toks = zip(*L(res).sorted(itemgetter(0)))
test_eq(toks, [['0','1'],['1','2']])
Tokenize texts in files
Preprocessing function for texts in filenames. Tokenized texts will be saved in files mirroring the structure of the originals, in a directory suffixed with _tok in the parent folder of path (override with output_dir). This directory is the return value.
tokenize_folder
[source]
tokenize_folder(path, extensions=None, folders=None, output_dir=None, skip_if_exists=True, output_names=None, n_workers=2, rules=None, tok=None, encoding='utf8')
Tokenize text files in path in parallel using n_workers
The result will be in output_dir (defaults to a folder in the same parent directory as path, with _tok added to path.name) with the same structure as in path. Tokenized texts for a given file will be in the file having the same name in output_dir. Additionally, a file with a .len suffix contains the number of tokens per text, and the counts of all words are stored in output_dir/counter.pkl.
extensions defaults to ['.txt'] and all text files in path are processed unless you specify a list of folders in folders. rules (defaulting to defaults.text_proc_rules) are applied to each text before it goes into the tokenizer.
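A small usage sketch (assuming, per the description above, that the returned path is the _tok output directory and that word counts are saved to counter.pkl):
with tempfile.TemporaryDirectory() as tmp_d:
    path = Path(tmp_d)/'texts'
    (path/'a').mkdir(parents=True)
    for i in range(3): (path/'a'/f'text{i}.txt').write_text(f"This is an example of text {i}")
    out = tokenize_folder(path, n_workers=1)      # tokenized copies land in <tmp_d>/texts_tok
    print((out/'a'/'text0.txt').read_text())      # same relative file name as the original
    print((out/'counter.pkl').exists())           # word counts for building a vocabulary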
tokenize_files
[source]
tokenize_files(files, path, output_dir, output_names=None, n_workers=2, rules=None, tok=None, encoding='utf8', skip_if_exists=False)
Tokenize text files in parallel using n_workers
Tokenize texts in a dataframe
tokenize_texts
[source]
tokenize_texts(texts, n_workers=2, rules=None, tok=None)
Tokenize texts in parallel using n_workers
tokenize_df
[source]
tokenize_df(df, text_cols, n_workers=2, rules=None, mark_fields=None, tok=None, tok_text_col='text')
Tokenize texts in df[text_cols] in parallel using n_workers and stores them in df[tok_text_col]
This function returns a new dataframe with the same non-text columns, a column named text that contains the tokenized texts and a column named text_lengths that contains their respective length. It also returns a counter of all seen words to quickly build a vocabulary afterward.
rules (defaulting to defaults.text_proc_rules) are applied to each text before it goes into the tokenizer. If mark_fields isn't specified, it defaults to False when there is a single text column and True when there are several. In that case, the texts in each of those columns are joined with FLD markers followed by the number of the field.
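For example, a quick sketch on a toy dataframe (same call pattern as in the Sentencepiece example further down):
texts = [f"This is an example of text {i}" for i in range(5)]
df = pd.DataFrame({'text': texts, 'label': list(range(5))})
out, cnt = tokenize_df(df, text_cols='text', n_workers=1)
print(out.head(2))          # tokenized texts plus the original non-text columns
print(cnt.most_common(3))   # word counts, handy for building a vocabulary later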
tokenize_csv
[source]
tokenize_csv(fname, text_cols, outname=None, n_workers=4, rules=None, mark_fields=None, tok=None, header='infer', chunksize=50000)
Tokenize texts in the text_cols of the csv fname in parallel using n_workers
load_tokenized_csv
[source]
load_tokenized_csv(fname)
Utility function to quickly load a tokenized csv and the corresponding counter
The result will be written in a new csv file in outname (defaults to the same as fname with the suffix _tok.csv) and will have the same header as the original file, the same non-text columns, and a text and a text_lengths column as described in tokenize_df.
rules (defaulting to defaults.text_proc_rules) are applied to each text before it goes into the tokenizer. If mark_fields isn't specified, it defaults to False when there is a single text column and True when there are several. In that case, the texts in each of those columns are joined with FLD markers followed by the number of the field.
The csv file is read with header and, optionally, in blocks of chunksize rows at a time. If this argument is passed, each chunk is processed independently and saved to the output file, which keeps memory usage low.
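A minimal round trip (a sketch; outname is passed explicitly here, and it is assumed that the counter is saved alongside the output csv so that load_tokenized_csv returns both the dataframe and the counter, as its description suggests):
with tempfile.TemporaryDirectory() as tmp_d:
    csv_fname, out_fname = Path(tmp_d)/'input.csv', Path(tmp_d)/'input_tok.csv'
    pd.DataFrame({'text': [f"This is an example of text {i}" for i in range(5)],
                  'label': list(range(5))}).to_csv(csv_fname, index=False)
    tokenize_csv(csv_fname, text_cols='text', outname=out_fname, n_workers=1)
    out_df, cnt = load_tokenized_csv(out_fname)
    test_eq(len(out_df), 5)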
def _prepare_texts(tmp_d):
    "Prepare texts in a folder structure in tmp_d and in a csv file, and return the path, the dataframe and the csv filename"
    path = Path(tmp_d)/'tmp'
    path.mkdir()
    for d in ['a', 'b', 'c']:
        (path/d).mkdir()
        for i in range(5):
            with open(path/d/f'text{i}.txt', 'w') as f: f.write(f"This is an example of text {d} {i}")
    texts = [f"This is an example of text {d} {i}" for i in range(5) for d in ['a', 'b', 'c']]
    df = pd.DataFrame({'text': texts, 'label': list(range(15))}, columns=['text', 'label'])
    csv_fname = tmp_d/'input.csv'
    df.to_csv(csv_fname, index=False)
    return path,df,csv_fname
class Tokenizer
[source]
Tokenizer(tok, rules=None, counter=None, lengths=None, mode=None, sep=' ') :: Transform
Provides a consistent Transform interface to tokenizers operating on DataFrames and folders
with tempfile.TemporaryDirectory() as tmp_d:
    path,df,csv_fname = _prepare_texts(Path(tmp_d))
    items = get_text_files(path)
    splits = RandomSplitter()(items)
    dsets = Datasets(items, [Tokenizer.from_folder(path)], splits=splits)
    print(dsets.train[0])
    dsets = Datasets(df, [Tokenizer.from_df('text')], splits=splits)
    print(dsets.train[0][0].text)
(['xxbos', 'xxmaj', 'this', 'is', 'an', 'example', 'of', 'text', 'b', '0'],)
('xxbos', 'xxmaj', 'this', 'is', 'an', 'example', 'of', 'text', 'b', '1')
tst = test_set(dsets, ['This is a test', 'this is another test'])
test_eq(tst, [(['xxbos', 'xxmaj', 'this','is','a','test'],),
(['xxbos','this','is','another','test'],)])
Sentencepiece
class SentencePieceTokenizer
[source]
SentencePieceTokenizer(lang='en', special_toks=None, sp_model=None, vocab_sz=None, max_vocab_sz=30000, model_type='unigram', char_coverage=None, cache_dir='tmp')
SentencePiece tokenizer for lang
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'label': list(range(10))}, columns=['text', 'label'])
out,cnt = tokenize_df(df, text_cols='text', tok=SentencePieceTokenizer(vocab_sz=34), n_workers=1)
with tempfile.TemporaryDirectory() as tmp_d:
    path,df,csv_fname = _prepare_texts(Path(tmp_d))
    items = get_text_files(path)
    splits = RandomSplitter()(items)
    tok = SentencePieceTokenizer(special_toks=[])
    dsets = Datasets(items, [Tokenizer.from_folder(path, tok=tok)], splits=splits)
    print(dsets.train[0][0])
    with warnings.catch_warnings():
        dsets = Datasets(df, [Tokenizer.from_df('text', tok=tok)], splits=splits)
        print(dsets.train[0][0].text)
['▁xx', 'b', 'o', 's', '▁xx', 'm', 'a', 'j', '▁t', 'h', 'i', 's', '▁', 'i', 's', '▁a', 'n', '▁', 'ex', 'a', 'm', 'p', 'l', 'e', '▁', 'o', 'f', '▁t', 'ex', 't', '▁a', '▁', '1']
['▁xx', 'b', 'o', 's', '▁xx', 'm', 'a', 'j', '▁t', 'h', 'i', 's', '▁', 'i', 's', '▁a', 'n', '▁', 'ex', 'a', 'm', 'p', 'l', 'e', '▁', 'o', 'f', '▁t', 'ex', 't', '▁', 'b', '▁', '2']