mirror of
https://github.com/mii443/tokenizers.git
synced 2025-08-22 16:25:30 +00:00
[FIX] In CharBPETokenizer, when Vocab or merges is None, unk_token cannot be used. (#1136)
* [fix] Use unk_token in SentencePieceBPETokenizer: when vocab or merges is None, unk_token could not be used.
* [fix] Also handle the case where unk_token is None.
* Update bindings/python/py_src/tokenizers/implementations/sentencepiece_bpe.py
* [FIX] Use unk_token in CharBPETokenizer: when vocab or merges is None, unk_token could not be used.
* Update bindings/python/py_src/tokenizers/implementations/char_level_bpe.py

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
@@ -45,7 +45,7 @@ class CharBPETokenizer(BaseTokenizer):
                 )
             )
         else:
-            tokenizer = Tokenizer(BPE())
+            tokenizer = Tokenizer(BPE(unk_token=str(unk_token), dropout=dropout, end_of_word_suffix=suffix))

         if tokenizer.token_to_id(str(unk_token)) is not None:
             tokenizer.add_special_tokens([str(unk_token)])
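The effect of the change can be sketched in plain Python, without the tokenizers dependency. The hypothetical helper `build_bpe_config` below is not part of the library; it only mirrors the constructor's branching so the bug is visible: before the fix, the vocab/merges-is-None branch built `BPE()` with no arguments, silently dropping the caller's `unk_token`, `dropout`, and suffix.

```python
def build_bpe_config(vocab=None, merges=None, unk_token="<unk>",
                     dropout=None, suffix="</w>"):
    """Return the kwargs the BPE model is constructed with (sketch only)."""
    if vocab is not None and merges is not None:
        # Trained case: vocab and merges are forwarded alongside the options.
        return {"vocab": vocab, "merges": merges, "unk_token": str(unk_token),
                "dropout": dropout, "end_of_word_suffix": suffix}
    # Fixed branch: previously equivalent to BPE() with no arguments;
    # now the options are forwarded even without a vocab/merges pair.
    return {"unk_token": str(unk_token), "dropout": dropout,
            "end_of_word_suffix": suffix}


# The untrained tokenizer now keeps its unk_token instead of losing it.
config = build_bpe_config(unk_token="[UNK]")
print(config["unk_token"])  # [UNK]
```

With the real API, this corresponds to `CharBPETokenizer(unk_token="[UNK]")` with no vocab or merges files: the unknown token now reaches the underlying `BPE` model and is registered as a special token.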