sentiment140-word-gap/roberta_with_year_from_scratch/1_train_tokenizer.py

2021-11-11 10:49:15 +01:00
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

# Training corpus: one tweet per line
paths = ['./train_in.csv']

# Initialize a byte-level BPE tokenizer (the tokenizer type RoBERTa uses)
tokenizer = ByteLevelBPETokenizer()

# Train with RoBERTa's vocabulary size (50265) and its special tokens
tokenizer.train(files=paths, vocab_size=50265, min_frequency=2, special_tokens=[
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<mask>",
])

# save_model() writes vocab.json and merges.txt; the directory must already exist
Path("./tokenizer_model").mkdir(exist_ok=True)
tokenizer.save_model("./tokenizer_model")
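A minimal sketch of the same training flow without touching the filesystem, assuming only that the `tokenizers` package is installed: `train_from_iterator` accepts an in-memory corpus (the sample sentences here are made up), and the trained tokenizer can then encode text directly. A small `vocab_size` is used since byte-level BPE starts from a 256-symbol alphabet plus the five special tokens.

```python
from tokenizers import ByteLevelBPETokenizer

# Hypothetical in-memory corpus standing in for train_in.csv
corpus = [
    "this movie was great",
    "this movie was terrible",
    "great day today",
]

tokenizer = ByteLevelBPETokenizer()
# Same special tokens as the script above; tiny vocab for the sketch
tokenizer.train_from_iterator(
    corpus,
    vocab_size=500,
    min_frequency=1,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Special tokens occupy the first ids, matching RoBERTa's convention
print(tokenizer.token_to_id("<s>"))    # id of the BOS token
print(tokenizer.token_to_id("<pad>"))  # id of the padding token

# Encode a sentence with the freshly trained vocabulary
enc = tokenizer.encode("this movie was great")
print(enc.tokens)
```

The saved `vocab.json` and `merges.txt` from the script above can later be loaded into a Hugging Face `RobertaTokenizerFast` for model training.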