
Deep Learning - Project

This project uses the emotion dataset, which contains short texts labeled with the emotion they express.


Labels:

  • 0 - sadness
  • 1 - joy
  • 2 - love
  • 3 - anger
  • 4 - fear
  • 5 - surprise
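For reference, the same mapping expressed in Python (a minimal sketch; it mirrors the list above and matches the textual labels that appear in the s2s-* files generated later):

# Mapping between numeric labels (train.json) and label names (s2s-train.json).
ID2LABEL = {
    0: "sadness",
    1: "joy",
    2: "love",
    3: "anger",
    4: "fear",
    5: "surprise",
}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}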

REQUIREMENTS

!pip3 install transformers scikit-learn accelerate evaluate datasets torch sentencepiece torchvision sacrebleu
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: transformers in /usr/local/lib/python3.8/dist-packages (4.26.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.8/dist-packages (1.0.2)
Requirement already satisfied: accelerate in /usr/local/lib/python3.8/dist-packages (0.16.0)
Requirement already satisfied: evaluate in /usr/local/lib/python3.8/dist-packages (0.4.0)
Requirement already satisfied: datasets in /usr/local/lib/python3.8/dist-packages (2.9.0)
Requirement already satisfied: torch in /usr/local/lib/python3.8/dist-packages (1.13.1+cu116)
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.8/dist-packages (0.1.97)
Requirement already satisfied: torchvision in /usr/local/lib/python3.8/dist-packages (0.14.1+cu116)
Collecting sacrebleu
  Downloading sacrebleu-2.3.1-py3-none-any.whl (118 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.9/118.9 KB 7.0 MB/s eta 0:00:00
Requirement already satisfied: huggingface-hub<1.0,>=0.11.0 in /usr/local/lib/python3.8/dist-packages (from transformers) (0.12.0)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.8/dist-packages (from transformers) (1.21.6)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /usr/local/lib/python3.8/dist-packages (from transformers) (0.13.2)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.8/dist-packages (from transformers) (4.64.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.8/dist-packages (from transformers) (3.9.0)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.8/dist-packages (from transformers) (23.0)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.8/dist-packages (from transformers) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.8/dist-packages (from transformers) (2022.6.2)
Requirement already satisfied: requests in /usr/local/lib/python3.8/dist-packages (from transformers) (2.25.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (3.1.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.2.0)
Requirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.7.3)
Requirement already satisfied: psutil in /usr/local/lib/python3.8/dist-packages (from accelerate) (5.4.8)
Requirement already satisfied: fsspec[http]>=2021.05.0 in /usr/local/lib/python3.8/dist-packages (from evaluate) (2023.1.0)
Requirement already satisfied: multiprocess in /usr/local/lib/python3.8/dist-packages (from evaluate) (0.70.14)
Requirement already satisfied: pandas in /usr/local/lib/python3.8/dist-packages (from evaluate) (1.3.5)
Requirement already satisfied: xxhash in /usr/local/lib/python3.8/dist-packages (from evaluate) (3.2.0)
Requirement already satisfied: responses<0.19 in /usr/local/lib/python3.8/dist-packages (from evaluate) (0.18.0)
Requirement already satisfied: dill in /usr/local/lib/python3.8/dist-packages (from evaluate) (0.3.6)
Requirement already satisfied: pyarrow>=6.0.0 in /usr/local/lib/python3.8/dist-packages (from datasets) (9.0.0)
Requirement already satisfied: aiohttp in /usr/local/lib/python3.8/dist-packages (from datasets) (3.8.3)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.8/dist-packages (from torch) (4.4.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.8/dist-packages (from torchvision) (7.1.2)
Requirement already satisfied: lxml in /usr/local/lib/python3.8/dist-packages (from sacrebleu) (4.9.2)
Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.8/dist-packages (from sacrebleu) (0.8.10)
Collecting portalocker
  Downloading portalocker-2.7.0-py2.py3-none-any.whl (15 kB)
Collecting colorama
  Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (1.3.1)
Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (1.8.2)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (6.0.4)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (2.1.1)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (4.0.2)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (22.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from aiohttp->datasets) (1.3.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (1.26.14)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (2022.12.7)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.8/dist-packages (from requests->transformers) (4.0.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas->evaluate) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas->evaluate) (2022.7.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-dateutil>=2.7.3->pandas->evaluate) (1.15.0)
Installing collected packages: portalocker, colorama, sacrebleu
Successfully installed colorama-0.4.6 portalocker-2.7.0 sacrebleu-2.3.1
import os
import json
from pathlib import Path
from typing import Dict, List
from datasets import load_dataset
import torch
import pandas as pd

os.environ['TOKENIZERS_PARALLELISM'] = 'true'

DATA PREP

!mkdir -p data
!python data_prep.py
Downloading builder script: 100% 3.97k/3.97k [00:00<00:00, 3.28MB/s]
Downloading metadata: 100% 3.28k/3.28k [00:00<00:00, 2.79MB/s]
Downloading readme: 100% 8.78k/8.78k [00:00<00:00, 7.56MB/s]
No config specified, defaulting to: emotion/split
Downloading and preparing dataset emotion/split to /root/.cache/huggingface/datasets/emotion/split/1.0.0/cca5efe2dfeb58c1d098e0f9eeb200e9927d889b5a03c67097275dfb5fe463bd...
Downloading data files:   0% 0/3 [00:00<?, ?it/s]
Downloading data:   0% 0.00/592k [00:00<?, ?B/s]
Downloading data:   9% 52.2k/592k [00:00<00:01, 438kB/s]
Downloading data: 100% 592k/592k [00:00<00:00, 2.36MB/s]
Downloading data files:  33% 1/3 [00:01<00:02,  1.35s/it]
Downloading data: 100% 74.0k/74.0k [00:00<00:00, 7.40MB/s]
Downloading data files:  67% 2/3 [00:01<00:00,  1.10it/s]
Downloading data: 100% 74.9k/74.9k [00:00<00:00, 8.28MB/s]
Downloading data files: 100% 3/3 [00:02<00:00,  1.15it/s]
Extracting data files: 100% 3/3 [00:00<00:00, 159.24it/s]
Dataset emotion downloaded and prepared to /root/.cache/huggingface/datasets/emotion/split/1.0.0/cca5efe2dfeb58c1d098e0f9eeb200e9927d889b5a03c67097275dfb5fe463bd. Subsequent calls will reuse this data.
100% 3/3 [00:00<00:00, 736.49it/s]
Saving into: data/train.json
Saving into: data/s2s-train.json
Saving into: data/valid.json
Saving into: data/s2s-valid.json
Saving into: data/test.json
Saving into: data/s2s-test.json
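The notebook does not show data_prep.py itself. Given the log above, it plausibly looks like the following hedged reconstruction: it downloads the HuggingFace emotion dataset and writes each split twice, once with numeric labels and once with label names for seq2seq models.

import json
from datasets import load_dataset

ID2LABEL = ["sadness", "joy", "love", "anger", "fear", "surprise"]

dataset = load_dataset("emotion")  # splits: train / validation / test

for split, prefix in [("train", "train"), ("validation", "valid"), ("test", "test")]:
    # Classification variant: integer labels.
    path = f"data/{prefix}.json"
    print(f"Saving into: {path}")
    with open(path, "w") as f:
        for ex in dataset[split]:
            f.write(json.dumps({"label": ex["label"], "text": ex["text"]}) + "\n")

    # Seq2seq variant: label names as target strings.
    s2s_path = f"data/s2s-{prefix}.json"
    print(f"Saving into: {s2s_path}")
    with open(s2s_path, "w") as f:
        for ex in dataset[split]:
            f.write(json.dumps({"label": ID2LABEL[ex["label"]], "text": ex["text"]}) + "\n")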
!head data/train.json
{"label": 0, "text": "i didnt feel humiliated"}
{"label": 0, "text": "i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake"}
{"label": 3, "text": "im grabbing a minute to post i feel greedy wrong"}
{"label": 2, "text": "i am ever feeling nostalgic about the fireplace i will know that it is still on the property"}
{"label": 3, "text": "i am feeling grouchy"}
{"label": 0, "text": "ive been feeling a little burdened lately wasnt sure why that was"}
{"label": 5, "text": "ive been taking or milligrams or times recommended amount and ive fallen asleep a lot faster but i also feel like so funny"}
{"label": 4, "text": "i feel as confused about life as a teenager or as jaded as a year old man"}
{"label": 1, "text": "i have been with petronas for years i feel that petronas has performed well and made a huge profit"}
{"label": 2, "text": "i feel romantic too"}
!head data/s2s-train.json
{"label": "sadness", "text": "i didnt feel humiliated"}
{"label": "sadness", "text": "i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake"}
{"label": "anger", "text": "im grabbing a minute to post i feel greedy wrong"}
{"label": "love", "text": "i am ever feeling nostalgic about the fireplace i will know that it is still on the property"}
{"label": "anger", "text": "i am feeling grouchy"}
{"label": "sadness", "text": "ive been feeling a little burdened lately wasnt sure why that was"}
{"label": "surprise", "text": "ive been taking or milligrams or times recommended amount and ive fallen asleep a lot faster but i also feel like so funny"}
{"label": "fear", "text": "i feel as confused about life as a teenager or as jaded as a year old man"}
{"label": "joy", "text": "i have been with petronas for years i feel that petronas has performed well and made a huge profit"}
{"label": "love", "text": "i feel romantic too"}
!wc -l data/*
   2000 data/s2s-test.json
  16000 data/s2s-train.json
   2000 data/s2s-valid.json
   2000 data/test.json
  16000 data/train.json
   2000 data/valid.json
  40000 total
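The 16000/2000/2000 line counts match the standard emotion split. The JSON-lines files can be sanity-checked with pandas (illustrative only; pd was imported earlier):

train_df = pd.read_json("data/train.json", lines=True)
print(train_df.shape)                    # expected: (16000, 2)
print(train_df["label"].value_counts())  # per-class distribution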
!mkdir -p cache

ROBERTA

  • full data
  • model: roberta-base
  • sequence length: 128
  • training epochs: 1
!python run_glue.py \
  --cache_dir cache/roberta \
  --model_name_or_path roberta-base \
  --train_file data/train.json  \
  --validation_file data/valid.json \
  --test_file data/test.json \
  --per_device_train_batch_size 24 \
  --per_device_eval_batch_size 24 \
  --do_train \
  --do_eval \
  --do_predict \
  --max_seq_length 128 \
  --learning_rate 2e-5 \
  --num_train_epochs 1 \
  --output_dir out/emotion/roberta  \
  --overwrite_output_dir
2023-02-14 23:03:23.770695: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-14 23:03:24.398188: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-02-14 23:03:26.384825: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-14 23:03:26.384975: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-14 23:03:26.384997: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:__main__:Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
INFO:__main__:Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=True,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=passive,
log_level_replica=passive,
log_on_each_node=True,
logging_dir=out/emotion/roberta/runs/Feb14_23-03-30_fc0011e45a00,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
optim=adamw_hf,
output_dir=out/emotion/roberta,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=24,
per_device_train_batch_size=24,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=out/emotion/roberta,
save_on_each_node=False,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
INFO:__main__:load a local file for train: data/train.json
INFO:__main__:load a local file for validation: data/valid.json
INFO:__main__:load a local file for test: data/test.json
WARNING:datasets.builder:Using custom data configuration default-e1b3a7f886502404
INFO:datasets.info:Loading Dataset Infos from /usr/local/lib/python3.8/dist-packages/datasets/packaged_modules/json
INFO:datasets.builder:Generating dataset json (/content/cache/roberta/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Downloading and preparing dataset json/default to /content/cache/roberta/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100% 3/3 [00:00<00:00, 14546.72it/s]
INFO:datasets.download.download_manager:Downloading took 0.0 min
INFO:datasets.download.download_manager:Checksum Computation took 0.0 min
Extracting data files: 100% 3/3 [00:00<00:00, 2116.91it/s]
INFO:datasets.utils.info_utils:Unable to verify checksums.
INFO:datasets.builder:Generating train split
INFO:datasets.builder:Generating validation split
INFO:datasets.builder:Generating test split
INFO:datasets.utils.info_utils:Unable to verify splits sizes.
Dataset json downloaded and prepared to /content/cache/roberta/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
100% 3/3 [00:00<00:00, 650.01it/s]
Downloading (…)lve/main/config.json: 100% 481/481 [00:00<00:00, 89.2kB/s]
[INFO|configuration_utils.py:653] 2023-02-14 23:03:31,805 >> loading configuration file config.json from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/config.json
[INFO|configuration_utils.py:705] 2023-02-14 23:03:31,805 >> Model config RobertaConfig {
  "_name_or_path": "roberta-base",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.23.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}

[INFO|tokenization_auto.py:418] 2023-02-14 23:03:31,898 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:653] 2023-02-14 23:03:31,989 >> loading configuration file config.json from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/config.json
[INFO|configuration_utils.py:705] 2023-02-14 23:03:31,990 >> Model config RobertaConfig {
  "_name_or_path": "roberta-base",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.23.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}

Downloading (…)olve/main/vocab.json: 100% 899k/899k [00:00<00:00, 9.68MB/s]
Downloading (…)olve/main/merges.txt: 100% 456k/456k [00:00<00:00, 4.94MB/s]
Downloading (…)/main/tokenizer.json: 100% 1.36M/1.36M [00:00<00:00, 11.9MB/s]
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:03:33,211 >> loading file vocab.json from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/vocab.json
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:03:33,211 >> loading file merges.txt from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/merges.txt
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:03:33,211 >> loading file tokenizer.json from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/tokenizer.json
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:03:33,211 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:03:33,211 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:03:33,211 >> loading file tokenizer_config.json from cache at None
[INFO|configuration_utils.py:653] 2023-02-14 23:03:33,211 >> loading configuration file config.json from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/config.json
[INFO|configuration_utils.py:705] 2023-02-14 23:03:33,212 >> Model config RobertaConfig {
  "_name_or_path": "roberta-base",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.23.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}

INFO:__main__:Using implementation from class: AutoModelForSequenceClassification
Downloading (…)"pytorch_model.bin";: 100% 501M/501M [00:04<00:00, 105MB/s]
[INFO|modeling_utils.py:2156] 2023-02-14 23:03:38,350 >> loading weights file pytorch_model.bin from cache at cache/roberta/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/pytorch_model.bin
[WARNING|modeling_utils.py:2596] 2023-02-14 23:03:39,757 >> Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'lm_head.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.bias', 'lm_head.dense.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2608] 2023-02-14 23:03:39,757 >> Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.dense.weight', 'classifier.out_proj.weight', 'classifier.dense.bias', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.


Frozen layers:
[('roberta.encoder.layer.0.attention.self.query.weight', False), ('roberta.encoder.layer.0.attention.self.query.bias', False), ('roberta.encoder.layer.0.attention.self.key.weight', False), ('roberta.encoder.layer.0.attention.self.key.bias', False), ('roberta.encoder.layer.0.attention.self.value.weight', False), ('roberta.encoder.layer.0.attention.self.value.bias', False), ('roberta.encoder.layer.0.attention.output.dense.weight', False), ('roberta.encoder.layer.0.attention.output.dense.bias', False), ('roberta.encoder.layer.0.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.0.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.0.intermediate.dense.weight', False), ('roberta.encoder.layer.0.intermediate.dense.bias', False), ('roberta.encoder.layer.0.output.dense.weight', False), ('roberta.encoder.layer.0.output.dense.bias', False), ('roberta.encoder.layer.0.output.LayerNorm.weight', False), ('roberta.encoder.layer.0.output.LayerNorm.bias', False), ('roberta.encoder.layer.2.attention.self.query.weight', False), ('roberta.encoder.layer.2.attention.self.query.bias', False), ('roberta.encoder.layer.2.attention.self.key.weight', False), ('roberta.encoder.layer.2.attention.self.key.bias', False), ('roberta.encoder.layer.2.attention.self.value.weight', False), ('roberta.encoder.layer.2.attention.self.value.bias', False), ('roberta.encoder.layer.2.attention.output.dense.weight', False), ('roberta.encoder.layer.2.attention.output.dense.bias', False), ('roberta.encoder.layer.2.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.2.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.2.intermediate.dense.weight', False), ('roberta.encoder.layer.2.intermediate.dense.bias', False), ('roberta.encoder.layer.2.output.dense.weight', False), ('roberta.encoder.layer.2.output.dense.bias', False), ('roberta.encoder.layer.2.output.LayerNorm.weight', False), ('roberta.encoder.layer.2.output.LayerNorm.bias', False), ('roberta.encoder.layer.4.attention.self.query.weight', False), ('roberta.encoder.layer.4.attention.self.query.bias', False), ('roberta.encoder.layer.4.attention.self.key.weight', False), ('roberta.encoder.layer.4.attention.self.key.bias', False), ('roberta.encoder.layer.4.attention.self.value.weight', False), ('roberta.encoder.layer.4.attention.self.value.bias', False), ('roberta.encoder.layer.4.attention.output.dense.weight', False), ('roberta.encoder.layer.4.attention.output.dense.bias', False), ('roberta.encoder.layer.4.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.4.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.4.intermediate.dense.weight', False), ('roberta.encoder.layer.4.intermediate.dense.bias', False), ('roberta.encoder.layer.4.output.dense.weight', False), ('roberta.encoder.layer.4.output.dense.bias', False), ('roberta.encoder.layer.4.output.LayerNorm.weight', False), ('roberta.encoder.layer.4.output.LayerNorm.bias', False), ('roberta.encoder.layer.6.attention.self.query.weight', False), ('roberta.encoder.layer.6.attention.self.query.bias', False), ('roberta.encoder.layer.6.attention.self.key.weight', False), ('roberta.encoder.layer.6.attention.self.key.bias', False), ('roberta.encoder.layer.6.attention.self.value.weight', False), ('roberta.encoder.layer.6.attention.self.value.bias', False), ('roberta.encoder.layer.6.attention.output.dense.weight', False), ('roberta.encoder.layer.6.attention.output.dense.bias', False), ('roberta.encoder.layer.6.attention.output.LayerNorm.weight', False), 
('roberta.encoder.layer.6.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.6.intermediate.dense.weight', False), ('roberta.encoder.layer.6.intermediate.dense.bias', False), ('roberta.encoder.layer.6.output.dense.weight', False), ('roberta.encoder.layer.6.output.dense.bias', False), ('roberta.encoder.layer.6.output.LayerNorm.weight', False), ('roberta.encoder.layer.6.output.LayerNorm.bias', False), ('roberta.encoder.layer.8.attention.self.query.weight', False), ('roberta.encoder.layer.8.attention.self.query.bias', False), ('roberta.encoder.layer.8.attention.self.key.weight', False), ('roberta.encoder.layer.8.attention.self.key.bias', False), ('roberta.encoder.layer.8.attention.self.value.weight', False), ('roberta.encoder.layer.8.attention.self.value.bias', False), ('roberta.encoder.layer.8.attention.output.dense.weight', False), ('roberta.encoder.layer.8.attention.output.dense.bias', False), ('roberta.encoder.layer.8.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.8.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.8.intermediate.dense.weight', False), ('roberta.encoder.layer.8.intermediate.dense.bias', False), ('roberta.encoder.layer.8.output.dense.weight', False), ('roberta.encoder.layer.8.output.dense.bias', False), ('roberta.encoder.layer.8.output.LayerNorm.weight', False), ('roberta.encoder.layer.8.output.LayerNorm.bias', False), ('roberta.encoder.layer.10.attention.self.query.weight', False), ('roberta.encoder.layer.10.attention.self.query.bias', False), ('roberta.encoder.layer.10.attention.self.key.weight', False), ('roberta.encoder.layer.10.attention.self.key.bias', False), ('roberta.encoder.layer.10.attention.self.value.weight', False), ('roberta.encoder.layer.10.attention.self.value.bias', False), ('roberta.encoder.layer.10.attention.output.dense.weight', False), ('roberta.encoder.layer.10.attention.output.dense.bias', False), ('roberta.encoder.layer.10.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.10.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.10.intermediate.dense.weight', False), ('roberta.encoder.layer.10.intermediate.dense.bias', False), ('roberta.encoder.layer.10.output.dense.weight', False), ('roberta.encoder.layer.10.output.dense.bias', False), ('roberta.encoder.layer.10.output.LayerNorm.weight', False), ('roberta.encoder.layer.10.output.LayerNorm.bias', False)] 
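The frozen-parameter dump above covers encoder layers 0, 2, 4, 6, 8 and 10, i.e. every other layer. Inside run_glue.py this is presumably done along these lines (a sketch under that assumption, not the project's exact code; model is the loaded RobertaForSequenceClassification):

# Freeze every even-numbered encoder layer; embeddings, odd layers
# and the classification head stay trainable.
for name, param in model.named_parameters():
    if ".encoder.layer." in name:
        layer_idx = int(name.split(".encoder.layer.")[1].split(".")[0])
        if layer_idx % 2 == 0:
            param.requires_grad = False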


Running tokenizer on dataset:   0% 0/16 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/cache/roberta/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-92255ed6858d2f1f.arrow
Running tokenizer on dataset: 100% 16/16 [00:00<00:00, 19.60ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/cache/roberta/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-541fdac4841bc78b.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00,  7.48ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/cache/roberta/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-7c07449907d4dcfe.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 20.93ba/s]
INFO:__main__:Sample 10476 of the training set: {'label': 0, 'text': 'i do find new friends i m going to try extra hard to make them stay and if i decide that i don t want to feel hurt again and just ride out the last year of school on my own i m going to have to try extra hard not to care what people think of me being a loner', 'input_ids': [0, 118, 109, 465, 92, 964, 939, 475, 164, 7, 860, 1823, 543, 7, 146, 106, 1095, 8, 114, 939, 2845, 14, 939, 218, 326, 236, 7, 619, 2581, 456, 8, 95, 3068, 66, 5, 94, 76, 9, 334, 15, 127, 308, 939, 475, 164, 7, 33, 7, 860, 1823, 543, 45, 7, 575, 99, 82, 206, 9, 162, 145, 10, 784, 9604, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 1824 of the training set: {'label': 1, 'text': 'i asked them to join me in creating a world where all year old girls could grow up feeling hopeful and powerful', 'input_ids': [0, 118, 553, 106, 7, 1962, 162, 11, 2351, 10, 232, 147, 70, 76, 793, 1972, 115, 1733, 62, 2157, 7917, 8, 2247, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 409 of the training set: {'label': 2, 'text': 'i feel when you are a caring person you attract other caring people into your life', 'input_ids': [0, 118, 619, 77, 47, 32, 10, 10837, 621, 47, 5696, 97, 10837, 82, 88, 110, 301, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
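The samples above have been padded/truncated to max_seq_length=128 (token id 1 is RoBERTa's pad token, id 0 the start token). The preprocessing is roughly equivalent to:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
enc = tokenizer(
    "i feel when you are a caring person you attract other caring people into your life",
    padding="max_length", truncation=True, max_length=128,
)
print(enc["input_ids"][:6])  # [0, 118, 619, 77, 47, 32] per the log above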
[INFO|trainer.py:725] 2023-02-14 23:03:48,468 >> The following columns in the training set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassification.forward`,  you can safely ignore this message.
/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
[INFO|trainer.py:1607] 2023-02-14 23:03:48,481 >> ***** Running training *****
[INFO|trainer.py:1608] 2023-02-14 23:03:48,481 >>   Num examples = 16000
[INFO|trainer.py:1609] 2023-02-14 23:03:48,481 >>   Num Epochs = 1
[INFO|trainer.py:1610] 2023-02-14 23:03:48,481 >>   Instantaneous batch size per device = 24
[INFO|trainer.py:1611] 2023-02-14 23:03:48,481 >>   Total train batch size (w. parallel, distributed & accumulation) = 24
[INFO|trainer.py:1612] 2023-02-14 23:03:48,481 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1613] 2023-02-14 23:03:48,481 >>   Total optimization steps = 667
{'loss': 0.8083, 'learning_rate': 5.0074962518740634e-06, 'epoch': 0.75}
 75% 500/667 [00:58<00:18,  8.81it/s][INFO|trainer.py:2656] 2023-02-14 23:04:46,596 >> Saving model checkpoint to out/emotion/roberta/checkpoint-500
[INFO|configuration_utils.py:447] 2023-02-14 23:04:46,597 >> Configuration saved in out/emotion/roberta/checkpoint-500/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:04:47,328 >> Model weights saved in out/emotion/roberta/checkpoint-500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:04:47,329 >> tokenizer config file saved in out/emotion/roberta/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:04:47,329 >> Special tokens file saved in out/emotion/roberta/checkpoint-500/special_tokens_map.json
100% 666/667 [01:18<00:00,  8.78it/s][INFO|trainer.py:1852] 2023-02-14 23:05:07,473 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)


{'train_runtime': 78.9923, 'train_samples_per_second': 202.551, 'train_steps_per_second': 8.444, 'train_loss': 0.7161429089227359, 'epoch': 1.0}
100% 667/667 [01:18<00:00,  8.45it/s]
[INFO|trainer.py:2656] 2023-02-14 23:05:07,475 >> Saving model checkpoint to out/emotion/roberta
[INFO|configuration_utils.py:447] 2023-02-14 23:05:07,476 >> Configuration saved in out/emotion/roberta/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:05:08,175 >> Model weights saved in out/emotion/roberta/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:05:08,176 >> tokenizer config file saved in out/emotion/roberta/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:05:08,176 >> Special tokens file saved in out/emotion/roberta/special_tokens_map.json
***** train metrics *****
  epoch                    =        1.0
  train_loss               =     0.7161
  train_runtime            = 0:01:18.99
  train_samples            =      16000
  train_samples_per_second =    202.551
  train_steps_per_second   =      8.444
INFO:__main__:*** Evaluate ***
[INFO|trainer.py:725] 2023-02-14 23:05:08,275 >> The following columns in the evaluation set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:05:08,277 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:05:08,277 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:05:08,277 >>   Batch size = 24
100% 84/84 [00:03<00:00, 23.63it/s]
***** eval metrics *****
  epoch                   =        1.0
  eval_accuracy           =      0.889
  eval_loss               =     0.3302
  eval_runtime            = 0:00:03.60
  eval_samples            =       2000
  eval_samples_per_second =    554.126
  eval_steps_per_second   =     23.273
INFO:__main__:*** Predict ***
[INFO|trainer.py:725] 2023-02-14 23:05:11,890 >> The following columns in the test set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:05:11,891 >> ***** Running Prediction *****
[INFO|trainer.py:2909] 2023-02-14 23:05:11,891 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:05:11,891 >>   Batch size = 24
100% 84/84 [00:03<00:00, 23.80it/s]
INFO:__main__:***** Predict results None *****
[INFO|modelcard.py:444] 2023-02-14 23:05:15,585 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.8889999985694885}]}
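The reported eval_accuracy of 0.889 is plain classification accuracy; conceptually the metric is computed like this (a minimal sketch using the evaluate library, not necessarily run_glue.py's exact code):

import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # pick the highest-scoring class
    return accuracy.compute(predictions=predictions, references=labels)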
ROBERTA - CUSTOM HEAD

  • full data
  • sequence length: 128
  • LeakyReLU instead of ReLU
  • every other layer frozen
  • custom classification head (sketched below)
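The custom head itself is not shown in the notebook. Based on the newly-initialized parameter names reported later in the log (dense_1_input, dense_1_hidden, dense_2 through dense_4, out_proj) and the bullet points above, it plausibly resembles the following; the layer sizes and wiring are assumptions:

import torch.nn as nn

class RobertaClassificationHeadCustom(nn.Module):
    # Hedged reconstruction; attribute names mirror the weights listed in the log.
    def __init__(self, hidden_size=768, num_labels=6, dropout=0.1):
        super().__init__()
        self.dense_1_input = nn.Linear(hidden_size, hidden_size)   # path for the <s> embedding
        self.dense_1_hidden = nn.Linear(hidden_size, hidden_size)  # path for extra hidden states
                                                                   # (unused: "Using hidden states in model: False")
        self.dense_2 = nn.Linear(hidden_size, hidden_size)
        self.dense_3 = nn.Linear(hidden_size, hidden_size)
        self.dense_4 = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, num_labels)
        self.act = nn.LeakyReLU()          # LeakyReLU instead of ReLU
        self.dropout = nn.Dropout(dropout)

    def forward(self, features):
        x = self.dropout(features[:, 0, :])  # take the <s> (CLS-equivalent) token
        x = self.act(self.dense_1_input(x))
        x = self.act(self.dense_2(x))
        x = self.act(self.dense_3(x))
        x = self.act(self.dense_4(x))
        return self.out_proj(self.dropout(x))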
!python run_glue.py \
  --cache_dir roberta_gpt_cache/roberta_custom \
  --model_name_or_path roberta-base \
  --custom_model roberta_custom \
  --train_file data/train.json  \
  --validation_file data/valid.json \
  --test_file data/test.json \
  --per_device_train_batch_size 24 \
  --per_device_eval_batch_size 24 \
  --do_train \
  --do_eval \
  --do_predict \
  --max_seq_length 128 \
  --learning_rate 2e-5 \
  --num_train_epochs 1 \
  --output_dir roberta_gpt_out/roberta_custom  \
  --overwrite_output_dir
2023-02-15 16:41:47.706819: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-15 16:41:47.841368: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-02-15 16:41:48.597437: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-15 16:41:48.597583: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-15 16:41:48.597617: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:__main__:Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
INFO:__main__:Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=True,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=passive,
log_level_replica=passive,
log_on_each_node=True,
logging_dir=roberta_gpt_out/roberta_custom/runs/Feb15_16-41-51_b7fb20e65b38,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
optim=adamw_hf,
optim_args=None,
output_dir=roberta_gpt_out/roberta_custom,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=24,
per_device_train_batch_size=24,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=roberta_gpt_out/roberta_custom,
save_on_each_node=False,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
INFO:__main__:load a local file for train: data/train.json
INFO:__main__:load a local file for validation: data/valid.json
INFO:__main__:load a local file for test: data/test.json
WARNING:datasets.builder:Using custom data configuration default-2fc7d0d25bce81a9
INFO:datasets.info:Loading Dataset Infos from /usr/local/lib/python3.8/dist-packages/datasets/packaged_modules/json
INFO:datasets.builder:Generating dataset json (/content/roberta_gpt_cache/roberta_custom/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Downloading and preparing dataset json/default to /content/roberta_gpt_cache/roberta_custom/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100% 3/3 [00:00<00:00, 13706.88it/s]
INFO:datasets.download.download_manager:Downloading took 0.0 min
INFO:datasets.download.download_manager:Checksum Computation took 0.0 min
Extracting data files: 100% 3/3 [00:00<00:00, 2053.35it/s]
INFO:datasets.utils.info_utils:Unable to verify checksums.
INFO:datasets.builder:Generating train split
INFO:datasets.builder:Generating validation split
INFO:datasets.builder:Generating test split
INFO:datasets.utils.info_utils:Unable to verify splits sizes.
Dataset json downloaded and prepared to /content/roberta_gpt_cache/roberta_custom/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
100% 3/3 [00:00<00:00, 870.49it/s]
Downloading (…)lve/main/config.json: 100% 481/481 [00:00<00:00, 84.7kB/s]
[INFO|configuration_utils.py:660] 2023-02-15 16:41:53,771 >> loading configuration file config.json from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/config.json
[INFO|configuration_utils.py:712] 2023-02-15 16:41:53,772 >> Model config RobertaConfig {
  "_name_or_path": "roberta-base",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.26.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}

[INFO|tokenization_auto.py:458] 2023-02-15 16:41:54,033 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:660] 2023-02-15 16:41:54,291 >> loading configuration file config.json from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/config.json
[INFO|configuration_utils.py:712] 2023-02-15 16:41:54,292 >> Model config RobertaConfig {
  "_name_or_path": "roberta-base",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.26.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}

Downloading (…)olve/main/vocab.json: 100% 899k/899k [00:00<00:00, 2.47MB/s]
Downloading (…)olve/main/merges.txt: 100% 456k/456k [00:00<00:00, 1.51MB/s]
Downloading (…)/main/tokenizer.json: 100% 1.36M/1.36M [00:00<00:00, 3.68MB/s]
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:41:57,754 >> loading file vocab.json from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/vocab.json
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:41:57,754 >> loading file merges.txt from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/merges.txt
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:41:57,754 >> loading file tokenizer.json from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/tokenizer.json
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:41:57,754 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:41:57,755 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:41:57,755 >> loading file tokenizer_config.json from cache at None
[INFO|configuration_utils.py:660] 2023-02-15 16:41:57,755 >> loading configuration file config.json from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/config.json
[INFO|configuration_utils.py:712] 2023-02-15 16:41:57,755 >> Model config RobertaConfig {
  "_name_or_path": "roberta-base",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.26.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}

INFO:__main__:Using hidden states in model: False
INFO:__main__:Using implementation from class: RobertaForSequenceClassificationCustomFIX
Downloading (…)"pytorch_model.bin";: 100% 501M/501M [00:05<00:00, 89.8MB/s]
[INFO|modeling_utils.py:2275] 2023-02-15 16:42:03,740 >> loading weights file pytorch_model.bin from cache at roberta_gpt_cache/roberta_custom/models--roberta-base/snapshots/ff46155979338ff8063cdad90908b498ab91b181/pytorch_model.bin
[WARNING|modeling_utils.py:2847] 2023-02-15 16:42:07,510 >> Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForSequenceClassificationCustomFIX: ['lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'lm_head.dense.bias', 'roberta.pooler.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']
- This IS expected if you are initializing RobertaForSequenceClassificationCustomFIX from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassificationCustomFIX from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2859] 2023-02-15 16:42:07,510 >> Some weights of RobertaForSequenceClassificationCustomFIX were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.dense_4.weight', 'classifier.out_proj.bias', 'classifier.dense_1_input.bias', 'classifier.dense_3.bias', 'classifier.dense_4.bias', 'classifier.dense_3.weight', 'classifier.dense_1_hidden.bias', 'classifier.dense_1_input.weight', 'classifier.dense_2.weight', 'classifier.out_proj.weight', 'classifier.dense_2.bias', 'classifier.dense_1_hidden.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.


Frozen layers:
[('roberta.encoder.layer.0.attention.self.query.weight', False), ('roberta.encoder.layer.0.attention.self.query.bias', False), ('roberta.encoder.layer.0.attention.self.key.weight', False), ('roberta.encoder.layer.0.attention.self.key.bias', False), ('roberta.encoder.layer.0.attention.self.value.weight', False), ('roberta.encoder.layer.0.attention.self.value.bias', False), ('roberta.encoder.layer.0.attention.output.dense.weight', False), ('roberta.encoder.layer.0.attention.output.dense.bias', False), ('roberta.encoder.layer.0.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.0.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.0.intermediate.dense.weight', False), ('roberta.encoder.layer.0.intermediate.dense.bias', False), ('roberta.encoder.layer.0.output.dense.weight', False), ('roberta.encoder.layer.0.output.dense.bias', False), ('roberta.encoder.layer.0.output.LayerNorm.weight', False), ('roberta.encoder.layer.0.output.LayerNorm.bias', False), ('roberta.encoder.layer.2.attention.self.query.weight', False), ('roberta.encoder.layer.2.attention.self.query.bias', False), ('roberta.encoder.layer.2.attention.self.key.weight', False), ('roberta.encoder.layer.2.attention.self.key.bias', False), ('roberta.encoder.layer.2.attention.self.value.weight', False), ('roberta.encoder.layer.2.attention.self.value.bias', False), ('roberta.encoder.layer.2.attention.output.dense.weight', False), ('roberta.encoder.layer.2.attention.output.dense.bias', False), ('roberta.encoder.layer.2.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.2.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.2.intermediate.dense.weight', False), ('roberta.encoder.layer.2.intermediate.dense.bias', False), ('roberta.encoder.layer.2.output.dense.weight', False), ('roberta.encoder.layer.2.output.dense.bias', False), ('roberta.encoder.layer.2.output.LayerNorm.weight', False), ('roberta.encoder.layer.2.output.LayerNorm.bias', False), ('roberta.encoder.layer.4.attention.self.query.weight', False), ('roberta.encoder.layer.4.attention.self.query.bias', False), ('roberta.encoder.layer.4.attention.self.key.weight', False), ('roberta.encoder.layer.4.attention.self.key.bias', False), ('roberta.encoder.layer.4.attention.self.value.weight', False), ('roberta.encoder.layer.4.attention.self.value.bias', False), ('roberta.encoder.layer.4.attention.output.dense.weight', False), ('roberta.encoder.layer.4.attention.output.dense.bias', False), ('roberta.encoder.layer.4.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.4.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.4.intermediate.dense.weight', False), ('roberta.encoder.layer.4.intermediate.dense.bias', False), ('roberta.encoder.layer.4.output.dense.weight', False), ('roberta.encoder.layer.4.output.dense.bias', False), ('roberta.encoder.layer.4.output.LayerNorm.weight', False), ('roberta.encoder.layer.4.output.LayerNorm.bias', False), ('roberta.encoder.layer.6.attention.self.query.weight', False), ('roberta.encoder.layer.6.attention.self.query.bias', False), ('roberta.encoder.layer.6.attention.self.key.weight', False), ('roberta.encoder.layer.6.attention.self.key.bias', False), ('roberta.encoder.layer.6.attention.self.value.weight', False), ('roberta.encoder.layer.6.attention.self.value.bias', False), ('roberta.encoder.layer.6.attention.output.dense.weight', False), ('roberta.encoder.layer.6.attention.output.dense.bias', False), ('roberta.encoder.layer.6.attention.output.LayerNorm.weight', False), 
('roberta.encoder.layer.6.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.6.intermediate.dense.weight', False), ('roberta.encoder.layer.6.intermediate.dense.bias', False), ('roberta.encoder.layer.6.output.dense.weight', False), ('roberta.encoder.layer.6.output.dense.bias', False), ('roberta.encoder.layer.6.output.LayerNorm.weight', False), ('roberta.encoder.layer.6.output.LayerNorm.bias', False), ('roberta.encoder.layer.8.attention.self.query.weight', False), ('roberta.encoder.layer.8.attention.self.query.bias', False), ('roberta.encoder.layer.8.attention.self.key.weight', False), ('roberta.encoder.layer.8.attention.self.key.bias', False), ('roberta.encoder.layer.8.attention.self.value.weight', False), ('roberta.encoder.layer.8.attention.self.value.bias', False), ('roberta.encoder.layer.8.attention.output.dense.weight', False), ('roberta.encoder.layer.8.attention.output.dense.bias', False), ('roberta.encoder.layer.8.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.8.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.8.intermediate.dense.weight', False), ('roberta.encoder.layer.8.intermediate.dense.bias', False), ('roberta.encoder.layer.8.output.dense.weight', False), ('roberta.encoder.layer.8.output.dense.bias', False), ('roberta.encoder.layer.8.output.LayerNorm.weight', False), ('roberta.encoder.layer.8.output.LayerNorm.bias', False), ('roberta.encoder.layer.10.attention.self.query.weight', False), ('roberta.encoder.layer.10.attention.self.query.bias', False), ('roberta.encoder.layer.10.attention.self.key.weight', False), ('roberta.encoder.layer.10.attention.self.key.bias', False), ('roberta.encoder.layer.10.attention.self.value.weight', False), ('roberta.encoder.layer.10.attention.self.value.bias', False), ('roberta.encoder.layer.10.attention.output.dense.weight', False), ('roberta.encoder.layer.10.attention.output.dense.bias', False), ('roberta.encoder.layer.10.attention.output.LayerNorm.weight', False), ('roberta.encoder.layer.10.attention.output.LayerNorm.bias', False), ('roberta.encoder.layer.10.intermediate.dense.weight', False), ('roberta.encoder.layer.10.intermediate.dense.bias', False), ('roberta.encoder.layer.10.output.dense.weight', False), ('roberta.encoder.layer.10.output.dense.bias', False), ('roberta.encoder.layer.10.output.LayerNorm.weight', False), ('roberta.encoder.layer.10.output.LayerNorm.bias', False)] 


Running tokenizer on dataset:   0% 0/16 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/roberta_gpt_cache/roberta_custom/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-7f5564bde85bb8c0.arrow
Running tokenizer on dataset: 100% 16/16 [00:00<00:00, 16.55ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/roberta_gpt_cache/roberta_custom/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-138d491b60344550.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 20.08ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/roberta_gpt_cache/roberta_custom/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-de6cef99ad203f9d.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 20.58ba/s]
INFO:__main__:Sample 10476 of the training set: {'label': 0, 'text': 'i do find new friends i m going to try extra hard to make them stay and if i decide that i don t want to feel hurt again and just ride out the last year of school on my own i m going to have to try extra hard not to care what people think of me being a loner', 'input_ids': [0, 118, 109, 465, 92, 964, 939, 475, 164, 7, 860, 1823, 543, 7, 146, 106, 1095, 8, 114, 939, 2845, 14, 939, 218, 326, 236, 7, 619, 2581, 456, 8, 95, 3068, 66, 5, 94, 76, 9, 334, 15, 127, 308, 939, 475, 164, 7, 33, 7, 860, 1823, 543, 45, 7, 575, 99, 82, 206, 9, 162, 145, 10, 784, 9604, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 1824 of the training set: {'label': 1, 'text': 'i asked them to join me in creating a world where all year old girls could grow up feeling hopeful and powerful', 'input_ids': [0, 118, 553, 106, 7, 1962, 162, 11, 2351, 10, 232, 147, 70, 76, 793, 1972, 115, 1733, 62, 2157, 7917, 8, 2247, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 409 of the training set: {'label': 2, 'text': 'i feel when you are a caring person you attract other caring people into your life', 'input_ids': [0, 118, 619, 77, 47, 32, 10, 10837, 621, 47, 5696, 97, 10837, 82, 88, 110, 301, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
[INFO|trainer.py:710] 2023-02-15 16:42:10,962 >> The following columns in the training set don't have a corresponding argument in `RobertaForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
[INFO|trainer.py:1650] 2023-02-15 16:42:10,969 >> ***** Running training *****
[INFO|trainer.py:1651] 2023-02-15 16:42:10,970 >>   Num examples = 16000
[INFO|trainer.py:1652] 2023-02-15 16:42:10,970 >>   Num Epochs = 1
[INFO|trainer.py:1653] 2023-02-15 16:42:10,970 >>   Instantaneous batch size per device = 24
[INFO|trainer.py:1654] 2023-02-15 16:42:10,970 >>   Total train batch size (w. parallel, distributed & accumulation) = 24
[INFO|trainer.py:1655] 2023-02-15 16:42:10,970 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1656] 2023-02-15 16:42:10,970 >>   Total optimization steps = 667
[INFO|trainer.py:1657] 2023-02-15 16:42:10,970 >>   Number of trainable parameters = 187723014
{'loss': 0.7753, 'learning_rate': 5.0074962518740634e-06, 'epoch': 0.75}
 75% 500/667 [01:00<00:19,  8.44it/s][INFO|trainer.py:2709] 2023-02-15 16:43:11,190 >> Saving model checkpoint to roberta_gpt_out/roberta_custom/checkpoint-500
[INFO|configuration_utils.py:453] 2023-02-15 16:43:11,191 >> Configuration saved in roberta_gpt_out/roberta_custom/checkpoint-500/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:43:12,662 >> Model weights saved in roberta_gpt_out/roberta_custom/checkpoint-500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:43:12,662 >> tokenizer config file saved in roberta_gpt_out/roberta_custom/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:43:12,663 >> Special tokens file saved in roberta_gpt_out/roberta_custom/checkpoint-500/special_tokens_map.json
100% 666/667 [01:24<00:00,  8.45it/s][INFO|trainer.py:1901] 2023-02-15 16:43:35,735 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)


{'train_runtime': 84.7645, 'train_samples_per_second': 188.758, 'train_steps_per_second': 7.869, 'train_loss': 0.6754124654286626, 'epoch': 1.0}
100% 667/667 [01:24<00:00,  7.87it/s]
[INFO|trainer.py:2709] 2023-02-15 16:43:35,737 >> Saving model checkpoint to roberta_gpt_out/roberta_custom
[INFO|configuration_utils.py:453] 2023-02-15 16:43:35,738 >> Configuration saved in roberta_gpt_out/roberta_custom/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:43:37,210 >> Model weights saved in roberta_gpt_out/roberta_custom/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:43:37,211 >> tokenizer config file saved in roberta_gpt_out/roberta_custom/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:43:37,211 >> Special tokens file saved in roberta_gpt_out/roberta_custom/special_tokens_map.json
***** train metrics *****
  epoch                    =        1.0
  train_loss               =     0.6754
  train_runtime            = 0:01:24.76
  train_samples            =      16000
  train_samples_per_second =    188.758
  train_steps_per_second   =      7.869
INFO:__main__:*** Evaluate ***
[INFO|trainer.py:710] 2023-02-15 16:43:37,321 >> The following columns in the evaluation set don't have a corresponding argument in `RobertaForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:43:37,323 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:43:37,323 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:43:37,323 >>   Batch size = 24
100% 84/84 [00:03<00:00, 23.11it/s]
***** eval metrics *****
  epoch                   =        1.0
  eval_accuracy           =      0.902
  eval_loss               =     0.2834
  eval_runtime            = 0:00:03.67
  eval_samples            =       2000
  eval_samples_per_second =    543.728
  eval_steps_per_second   =     22.837
INFO:__main__:*** Predict ***
[INFO|trainer.py:710] 2023-02-15 16:43:41,005 >> The following columns in the test set don't have a corresponding argument in `RobertaForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:43:41,006 >> ***** Running Prediction *****
[INFO|trainer.py:2966] 2023-02-15 16:43:41,006 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:43:41,006 >>   Batch size = 24
100% 84/84 [00:03<00:00, 23.10it/s]
INFO:__main__:***** Predict results None *****
[INFO|modelcard.py:449] 2023-02-15 16:43:44,966 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.9020000100135803}]}
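The trainer above reports 187,723,014 trainable parameters for this partially frozen model. A generic helper (a sketch, not part of run_glue.py) reproduces such a count for any loaded model:

def count_trainable_parameters(model):
    # only parameters with requires_grad=True are updated by the optimizer
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# e.g. count_trainable_parameters(model) -> 187723014 for the run above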

GPT2

  • full dataset
  • model: GPT2
  • sequence length: 128
  • training epochs: 1 (overridden by --max_steps 2500 below, so the run actually trains for ~3.75 epochs)
!python run_glue.py \
  --cache_dir cache/gpt2 \
  --model_name_or_path gpt2 \
  --train_file data/train.json  \
  --validation_file data/valid.json \
  --test_file data/test.json  \
  --per_device_train_batch_size 24  \
  --per_device_eval_batch_size 24 \
  --do_train  \
  --do_eval \
  --do_predict  \
  --max_seq_length 128  \
  --learning_rate 2e-5  \
  --num_train_epochs 1  \
  --output_dir out/gpt2  \
  --overwrite_output_dir \
  --eval_steps 250 \
  --evaluation_strategy steps \
  --metric_for_best_model accuracy \
  --logging_steps 100 \
  --save_total_limit 5 \
  --max_steps 2500 \
  --load_best_model_at_end True 
2023-02-14 23:07:08.570069: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-14 23:07:08.727449: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-02-14 23:07:09.520905: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-14 23:07:09.521007: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-14 23:07:09.521044: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:__main__:Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
INFO:__main__:Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=True,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=250,
evaluation_strategy=steps,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=True,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=True,
local_rank=-1,
log_level=passive,
log_level_replica=passive,
log_on_each_node=True,
logging_dir=out/emotion/gpt2/runs/Feb14_23-07-11_fc0011e45a00,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=100,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=2500,
metric_for_best_model=accuracy,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
optim=adamw_hf,
output_dir=out/emotion/gpt2,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=24,
per_device_train_batch_size=24,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=out/emotion/gpt2,
save_on_each_node=False,
save_steps=500,
save_strategy=steps,
save_total_limit=5,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
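The flag dump above is just the TrainingArguments object built from the command line. Constructed directly in Python, the non-default fields would look roughly like this (a sketch mirroring the flags, not code from the notebook):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out/emotion/gpt2",
    overwrite_output_dir=True,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    learning_rate=2e-5,
    max_steps=2500,              # overrides num_train_epochs
    evaluation_strategy="steps",
    eval_steps=250,
    logging_steps=100,
    save_total_limit=5,
    metric_for_best_model="accuracy",
    load_best_model_at_end=True,
)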
INFO:__main__:load a local file for train: data/train.json
INFO:__main__:load a local file for validation: data/valid.json
INFO:__main__:load a local file for test: data/test.json
WARNING:datasets.builder:Using custom data configuration default-e1b3a7f886502404
INFO:datasets.info:Loading Dataset Infos from /usr/local/lib/python3.8/dist-packages/datasets/packaged_modules/json
INFO:datasets.builder:Generating dataset json (/content/cache/gpt2/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Downloading and preparing dataset json/default to /content/cache/gpt2/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100% 3/3 [00:00<00:00, 12932.08it/s]
INFO:datasets.download.download_manager:Downloading took 0.0 min
INFO:datasets.download.download_manager:Checksum Computation took 0.0 min
Extracting data files: 100% 3/3 [00:00<00:00, 2039.37it/s]
INFO:datasets.utils.info_utils:Unable to verify checksums.
INFO:datasets.builder:Generating train split
INFO:datasets.builder:Generating validation split
INFO:datasets.builder:Generating test split
INFO:datasets.utils.info_utils:Unable to verify splits sizes.
Dataset json downloaded and prepared to /content/cache/gpt2/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
100% 3/3 [00:00<00:00, 604.92it/s]
Downloading (…)lve/main/config.json: 100% 665/665 [00:00<00:00, 122kB/s]
[INFO|configuration_utils.py:653] 2023-02-14 23:07:12,924 >> loading configuration file config.json from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/config.json
[INFO|configuration_utils.py:705] 2023-02-14 23:07:12,925 >> Model config GPT2Config {
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "transformers_version": "4.23.1",
  "use_cache": true,
  "vocab_size": 50257
}

[INFO|tokenization_auto.py:418] 2023-02-14 23:07:13,018 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:653] 2023-02-14 23:07:13,111 >> loading configuration file config.json from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/config.json
[INFO|configuration_utils.py:705] 2023-02-14 23:07:13,111 >> Model config GPT2Config {
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "transformers_version": "4.23.1",
  "use_cache": true,
  "vocab_size": 50257
}

Downloading (…)olve/main/vocab.json: 100% 1.04M/1.04M [00:00<00:00, 9.08MB/s]
Downloading (…)olve/main/merges.txt: 100% 456k/456k [00:00<00:00, 5.01MB/s]
Downloading (…)/main/tokenizer.json: 100% 1.36M/1.36M [00:00<00:00, 12.0MB/s]
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:07:14,433 >> loading file vocab.json from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/vocab.json
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:07:14,433 >> loading file merges.txt from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/merges.txt
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:07:14,434 >> loading file tokenizer.json from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/tokenizer.json
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:07:14,434 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:07:14,434 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1773] 2023-02-14 23:07:14,434 >> loading file tokenizer_config.json from cache at None
[INFO|configuration_utils.py:653] 2023-02-14 23:07:14,434 >> loading configuration file config.json from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/config.json
[INFO|configuration_utils.py:705] 2023-02-14 23:07:14,435 >> Model config GPT2Config {
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "transformers_version": "4.23.1",
  "use_cache": true,
  "vocab_size": 50257
}

INFO:__main__:Using implementation from class: AutoModelForSequenceClassification
Downloading (…)"pytorch_model.bin";: 100% 548M/548M [00:05<00:00, 107MB/s]
[INFO|modeling_utils.py:2156] 2023-02-14 23:07:19,840 >> loading weights file pytorch_model.bin from cache at cache/gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/pytorch_model.bin
[INFO|modeling_utils.py:2606] 2023-02-14 23:07:21,267 >> All model checkpoint weights were used when initializing GPT2ForSequenceClassification.

[WARNING|modeling_utils.py:2608] 2023-02-14 23:07:21,267 >> Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[ERROR|tokenization_utils_base.py:1019] 2023-02-14 23:07:21,277 >> Using pad_token, but it is not set yet.
INFO:__main__:Set PAD token to EOS: <|endoftext|>
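GPT-2 ships without a padding token, which is why the script falls back to EOS here. A minimal sketch of that fix (standard transformers API; the exact code in run_glue.py may differ):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=6)
tokenizer.pad_token = tokenizer.eos_token            # <|endoftext|>, id 50256
model.config.pad_token_id = tokenizer.eos_token_id   # lets GPT2ForSequenceClassification locate the last non-pad token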
Running tokenizer on dataset:   0% 0/16 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/cache/gpt2/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-cf43a94824d333a0.arrow
Running tokenizer on dataset: 100% 16/16 [00:00<00:00, 19.87ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/cache/gpt2/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-742ee95b271da600.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 20.69ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/cache/gpt2/json/default-e1b3a7f886502404/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-9e616ad6e069d412.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 20.84ba/s]
INFO:__main__:Sample 10476 of the training set: {'label': 0, 'text': 'i do find new friends i m going to try extra hard to make them stay and if i decide that i don t want to feel hurt again and just ride out the last year of school on my own i m going to have to try extra hard not to care what people think of me being a loner', 'input_ids': [72, 466, 1064, 649, 2460, 1312, 285, 1016, 284, 1949, 3131, 1327, 284, 787, 606, 2652, 290, 611, 1312, 5409, 326, 1312, 836, 256, 765, 284, 1254, 5938, 757, 290, 655, 6594, 503, 262, 938, 614, 286, 1524, 319, 616, 898, 1312, 285, 1016, 284, 423, 284, 1949, 3131, 1327, 407, 284, 1337, 644, 661, 892, 286, 502, 852, 257, 300, 14491, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 1824 of the training set: {'label': 1, 'text': 'i asked them to join me in creating a world where all year old girls could grow up feeling hopeful and powerful', 'input_ids': [72, 1965, 606, 284, 4654, 502, 287, 4441, 257, 995, 810, 477, 614, 1468, 4813, 714, 1663, 510, 4203, 17836, 290, 3665, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 409 of the training set: {'label': 2, 'text': 'i feel when you are a caring person you attract other caring people into your life', 'input_ids': [72, 1254, 618, 345, 389, 257, 18088, 1048, 345, 4729, 584, 18088, 661, 656, 534, 1204, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
[INFO|trainer.py:503] 2023-02-14 23:07:24,803 >> max_steps is given, it will override any value given in num_train_epochs
[INFO|trainer.py:725] 2023-02-14 23:07:24,804 >> The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
[INFO|trainer.py:1607] 2023-02-14 23:07:24,810 >> ***** Running training *****
[INFO|trainer.py:1608] 2023-02-14 23:07:24,810 >>   Num examples = 16000
[INFO|trainer.py:1609] 2023-02-14 23:07:24,810 >>   Num Epochs = 4
[INFO|trainer.py:1610] 2023-02-14 23:07:24,810 >>   Instantaneous batch size per device = 24
[INFO|trainer.py:1611] 2023-02-14 23:07:24,810 >>   Total train batch size (w. parallel, distributed & accumulation) = 24
[INFO|trainer.py:1612] 2023-02-14 23:07:24,810 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1613] 2023-02-14 23:07:24,810 >>   Total optimization steps = 2500
{'loss': 2.3442, 'learning_rate': 1.9200000000000003e-05, 'epoch': 0.15}
{'loss': 1.3126, 'learning_rate': 1.8400000000000003e-05, 'epoch': 0.3}
 10% 250/2500 [00:37<05:29,  6.83it/s][INFO|trainer.py:725] 2023-02-14 23:08:02,490 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:08:02,492 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:08:02,492 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:08:02,492 >>   Batch size = 24

{'eval_loss': 0.7983964085578918, 'eval_accuracy': 0.7465000152587891, 'eval_runtime': 3.9478, 'eval_samples_per_second': 506.612, 'eval_steps_per_second': 21.278, 'epoch': 0.37}

 10% 250/2500 [00:41<05:29,  6.83it/s]
{'loss': 0.7216, 'learning_rate': 1.76e-05, 'epoch': 0.45}
{'loss': 0.5032, 'learning_rate': 1.6800000000000002e-05, 'epoch': 0.6}
{'loss': 0.3904, 'learning_rate': 1.6000000000000003e-05, 'epoch': 0.75}
 20% 500/2500 [01:18<04:53,  6.82it/s][INFO|trainer.py:725] 2023-02-14 23:08:43,155 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:08:43,157 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:08:43,157 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:08:43,157 >>   Batch size = 24

{'eval_loss': 0.29131895303726196, 'eval_accuracy': 0.9035000205039978, 'eval_runtime': 3.9682, 'eval_samples_per_second': 504.003, 'eval_steps_per_second': 21.168, 'epoch': 0.75}

 20% 500/2500 [01:22<04:53,  6.82it/s]
                                   [INFO|trainer.py:2656] 2023-02-14 23:08:47,127 >> Saving model checkpoint to out/emotion/gpt2/checkpoint-500
[INFO|configuration_utils.py:447] 2023-02-14 23:08:47,128 >> Configuration saved in out/emotion/gpt2/checkpoint-500/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:08:47,844 >> Model weights saved in out/emotion/gpt2/checkpoint-500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:08:47,845 >> tokenizer config file saved in out/emotion/gpt2/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:08:47,845 >> Special tokens file saved in out/emotion/gpt2/checkpoint-500/special_tokens_map.json
{'loss': 0.3554, 'learning_rate': 1.5200000000000002e-05, 'epoch': 0.9}
{'loss': 0.2871, 'learning_rate': 1.4400000000000001e-05, 'epoch': 1.05}
 30% 750/2500 [02:01<04:16,  6.82it/s][INFO|trainer.py:725] 2023-02-14 23:09:26,109 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:09:26,111 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:09:26,111 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:09:26,111 >>   Batch size = 24

{'eval_loss': 0.2168988287448883, 'eval_accuracy': 0.9235000014305115, 'eval_runtime': 3.9585, 'eval_samples_per_second': 505.237, 'eval_steps_per_second': 21.22, 'epoch': 1.12}

 30% 750/2500 [02:05<04:16,  6.82it/s]
{'loss': 0.2285, 'learning_rate': 1.3600000000000002e-05, 'epoch': 1.2}
{'loss': 0.1888, 'learning_rate': 1.2800000000000001e-05, 'epoch': 1.35}
{'loss': 0.2106, 'learning_rate': 1.2e-05, 'epoch': 1.5}
 40% 1000/2500 [02:41<03:39,  6.82it/s][INFO|trainer.py:725] 2023-02-14 23:10:06,735 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:10:06,737 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:10:06,737 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:10:06,737 >>   Batch size = 24

{'eval_loss': 0.19490236043930054, 'eval_accuracy': 0.9259999990463257, 'eval_runtime': 3.9583, 'eval_samples_per_second': 505.269, 'eval_steps_per_second': 21.221, 'epoch': 1.5}

 40% 1000/2500 [02:45<03:39,  6.82it/s]
                                   [INFO|trainer.py:2656] 2023-02-14 23:10:10,696 >> Saving model checkpoint to out/emotion/gpt2/checkpoint-1000
[INFO|configuration_utils.py:447] 2023-02-14 23:10:10,697 >> Configuration saved in out/emotion/gpt2/checkpoint-1000/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:10:11,400 >> Model weights saved in out/emotion/gpt2/checkpoint-1000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:10:11,401 >> tokenizer config file saved in out/emotion/gpt2/checkpoint-1000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:10:11,401 >> Special tokens file saved in out/emotion/gpt2/checkpoint-1000/special_tokens_map.json
{'loss': 0.1906, 'learning_rate': 1.1200000000000001e-05, 'epoch': 1.65}
{'loss': 0.1793, 'learning_rate': 1.04e-05, 'epoch': 1.8}
 50% 1250/2500 [03:24<03:03,  6.83it/s][INFO|trainer.py:725] 2023-02-14 23:10:49,631 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:10:49,633 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:10:49,633 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:10:49,633 >>   Batch size = 24

{'eval_loss': 0.1607103943824768, 'eval_accuracy': 0.9319999814033508, 'eval_runtime': 3.9556, 'eval_samples_per_second': 505.611, 'eval_steps_per_second': 21.236, 'epoch': 1.87}

 50% 1250/2500 [03:28<03:03,  6.83it/s]
{'loss': 0.2116, 'learning_rate': 9.600000000000001e-06, 'epoch': 1.95}
{'loss': 0.1536, 'learning_rate': 8.8e-06, 'epoch': 2.1}
{'loss': 0.1518, 'learning_rate': 8.000000000000001e-06, 'epoch': 2.25}
 60% 1500/2500 [04:05<02:26,  6.84it/s][INFO|trainer.py:725] 2023-02-14 23:11:30,201 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:11:30,203 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:11:30,203 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:11:30,203 >>   Batch size = 24

{'eval_loss': 0.160899356007576, 'eval_accuracy': 0.9330000281333923, 'eval_runtime': 3.9453, 'eval_samples_per_second': 506.928, 'eval_steps_per_second': 21.291, 'epoch': 2.25}

 60% 1500/2500 [04:09<02:26,  6.84it/s]
                                   [INFO|trainer.py:2656] 2023-02-14 23:11:34,149 >> Saving model checkpoint to out/emotion/gpt2/checkpoint-1500
[INFO|configuration_utils.py:447] 2023-02-14 23:11:34,150 >> Configuration saved in out/emotion/gpt2/checkpoint-1500/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:11:34,855 >> Model weights saved in out/emotion/gpt2/checkpoint-1500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:11:34,856 >> tokenizer config file saved in out/emotion/gpt2/checkpoint-1500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:11:34,856 >> Special tokens file saved in out/emotion/gpt2/checkpoint-1500/special_tokens_map.json
{'loss': 0.157, 'learning_rate': 7.2000000000000005e-06, 'epoch': 2.4}
{'loss': 0.141, 'learning_rate': 6.4000000000000006e-06, 'epoch': 2.55}
 70% 1750/2500 [04:48<01:49,  6.83it/s][INFO|trainer.py:725] 2023-02-14 23:12:13,242 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:12:13,244 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:12:13,244 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:12:13,244 >>   Batch size = 24

{'eval_loss': 0.15204769372940063, 'eval_accuracy': 0.9319999814033508, 'eval_runtime': 3.9593, 'eval_samples_per_second': 505.142, 'eval_steps_per_second': 21.216, 'epoch': 2.62}

 70% 1750/2500 [04:52<01:49,  6.83it/s]
{'loss': 0.1426, 'learning_rate': 5.600000000000001e-06, 'epoch': 2.7}
{'loss': 0.1463, 'learning_rate': 4.800000000000001e-06, 'epoch': 2.85}
{'loss': 0.1403, 'learning_rate': 4.000000000000001e-06, 'epoch': 3.0}
 80% 2000/2500 [05:29<01:13,  6.79it/s][INFO|trainer.py:725] 2023-02-14 23:12:53,905 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:12:53,907 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:12:53,907 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:12:53,907 >>   Batch size = 24

{'eval_loss': 0.14609387516975403, 'eval_accuracy': 0.9290000200271606, 'eval_runtime': 3.9539, 'eval_samples_per_second': 505.831, 'eval_steps_per_second': 21.245, 'epoch': 3.0}

 80% 2000/2500 [05:33<01:13,  6.79it/s]
                                   [INFO|trainer.py:2656] 2023-02-14 23:12:57,862 >> Saving model checkpoint to out/emotion/gpt2/checkpoint-2000
[INFO|configuration_utils.py:447] 2023-02-14 23:12:57,863 >> Configuration saved in out/emotion/gpt2/checkpoint-2000/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:12:58,567 >> Model weights saved in out/emotion/gpt2/checkpoint-2000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:12:58,568 >> tokenizer config file saved in out/emotion/gpt2/checkpoint-2000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:12:58,568 >> Special tokens file saved in out/emotion/gpt2/checkpoint-2000/special_tokens_map.json
{'loss': 0.1256, 'learning_rate': 3.2000000000000003e-06, 'epoch': 3.15}
{'loss': 0.1246, 'learning_rate': 2.4000000000000003e-06, 'epoch': 3.3}
 90% 2250/2500 [06:12<00:36,  6.83it/s][INFO|trainer.py:725] 2023-02-14 23:13:36,839 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:13:36,841 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:13:36,841 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:13:36,842 >>   Batch size = 24

{'eval_loss': 0.15553689002990723, 'eval_accuracy': 0.9294999837875366, 'eval_runtime': 3.9468, 'eval_samples_per_second': 506.742, 'eval_steps_per_second': 21.283, 'epoch': 3.37}

 90% 2250/2500 [06:15<00:36,  6.83it/s]
{'loss': 0.1174, 'learning_rate': 1.6000000000000001e-06, 'epoch': 3.45}
{'loss': 0.1374, 'learning_rate': 8.000000000000001e-07, 'epoch': 3.6}
{'loss': 0.1207, 'learning_rate': 0.0, 'epoch': 3.75}
100% 2500/2500 [06:52<00:00,  6.78it/s][INFO|trainer.py:725] 2023-02-14 23:14:17,492 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:14:17,494 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:14:17,494 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:14:17,494 >>   Batch size = 24

{'eval_loss': 0.15162073075771332, 'eval_accuracy': 0.9309999942779541, 'eval_runtime': 3.9562, 'eval_samples_per_second': 505.538, 'eval_steps_per_second': 21.233, 'epoch': 3.75}

100% 2500/2500 [06:56<00:00,  6.78it/s]
                                   [INFO|trainer.py:2656] 2023-02-14 23:14:21,452 >> Saving model checkpoint to out/emotion/gpt2/checkpoint-2500
[INFO|configuration_utils.py:447] 2023-02-14 23:14:21,452 >> Configuration saved in out/emotion/gpt2/checkpoint-2500/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:14:22,156 >> Model weights saved in out/emotion/gpt2/checkpoint-2500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:14:22,157 >> tokenizer config file saved in out/emotion/gpt2/checkpoint-2500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:14:22,157 >> Special tokens file saved in out/emotion/gpt2/checkpoint-2500/special_tokens_map.json
[INFO|trainer.py:1852] 2023-02-14 23:14:23,723 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)


[INFO|trainer.py:1946] 2023-02-14 23:14:23,723 >> Loading best model from out/emotion/gpt2/checkpoint-1500 (score: 0.9330000281333923).
{'train_runtime': 419.344, 'train_samples_per_second': 143.081, 'train_steps_per_second': 5.962, 'train_loss': 0.351297896194458, 'epoch': 3.75}
100% 2500/2500 [06:59<00:00,  5.96it/s]
[INFO|trainer.py:2656] 2023-02-14 23:14:24,156 >> Saving model checkpoint to out/emotion/gpt2
[INFO|configuration_utils.py:447] 2023-02-14 23:14:24,157 >> Configuration saved in out/emotion/gpt2/config.json
[INFO|modeling_utils.py:1624] 2023-02-14 23:14:25,141 >> Model weights saved in out/emotion/gpt2/pytorch_model.bin
[INFO|tokenization_utils_base.py:2123] 2023-02-14 23:14:25,142 >> tokenizer config file saved in out/emotion/gpt2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2130] 2023-02-14 23:14:25,142 >> Special tokens file saved in out/emotion/gpt2/special_tokens_map.json
***** train metrics *****
  epoch                    =       3.75
  train_loss               =     0.3513
  train_runtime            = 0:06:59.34
  train_samples            =      16000
  train_samples_per_second =    143.081
  train_steps_per_second   =      5.962
INFO:__main__:*** Evaluate ***
[INFO|trainer.py:725] 2023-02-14 23:14:25,244 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:14:25,246 >> ***** Running Evaluation *****
[INFO|trainer.py:2909] 2023-02-14 23:14:25,246 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:14:25,246 >>   Batch size = 24
100% 84/84 [00:03<00:00, 21.33it/s]
***** eval metrics *****
  epoch                   =       3.75
  eval_accuracy           =      0.933
  eval_loss               =     0.1609
  eval_runtime            = 0:00:03.99
  eval_samples            =       2000
  eval_samples_per_second =    500.658
  eval_steps_per_second   =     21.028
INFO:__main__:*** Predict ***
[INFO|trainer.py:725] 2023-02-14 23:14:29,244 >> The following columns in the test set don't have a corresponding argument in `GPT2ForSequenceClassification.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassification.forward`,  you can safely ignore this message.
[INFO|trainer.py:2907] 2023-02-14 23:14:29,245 >> ***** Running Prediction *****
[INFO|trainer.py:2909] 2023-02-14 23:14:29,246 >>   Num examples = 2000
[INFO|trainer.py:2912] 2023-02-14 23:14:29,246 >>   Batch size = 24
100% 84/84 [00:03<00:00, 21.53it/s]
INFO:__main__:***** Predict results None *****
[INFO|modelcard.py:444] 2023-02-14 23:14:33,320 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.9330000281333923}]}
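With the best checkpoint (eval accuracy ≈ 0.933) reloaded and saved to out/emotion/gpt2, the model can be used for inference. A hedged example (the path comes from the logs above; the pipeline call is standard transformers API):

from transformers import pipeline

classifier = pipeline("text-classification", model="out/emotion/gpt2")
print(classifier("i feel hopeful and powerful"))
# expected shape: [{'label': 'LABEL_1', 'score': ...}] — LABEL_0..LABEL_5 as in the config above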
GPT2 + custom head

  • full dataset
  • custom classification head (--custom_model gpt2_custom)
  • sequence length: 128
  • max steps: 2500
!python run_glue.py \
  --cache_dir roberta_gpt_cache/custom_gpt2 \
  --model_name_or_path gpt2 \
  --custom_model gpt2_custom  \
  --train_file data/train.json  \
  --validation_file data/valid.json \
  --test_file data/test.json  \
  --per_device_train_batch_size 24  \
  --per_device_eval_batch_size 24 \
  --do_train  \
  --do_eval \
  --do_predict  \
  --max_seq_length 128  \
  --learning_rate 2e-5  \
  --num_train_epochs 1  \
  --output_dir roberta_gpt_out/gpt2_custom  \
  --overwrite_output_dir \
  --eval_steps 250 \
  --evaluation_strategy steps \
  --metric_for_best_model accuracy \
  --logging_steps 100 \
  --save_total_limit 5 \
  --max_steps 2500 \
  --load_best_model_at_end True 
2023-02-15 16:43:47.454540: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-15 16:43:47.587338: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-02-15 16:43:48.351405: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-15 16:43:48.351528: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-15 16:43:48.351554: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:__main__:Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
INFO:__main__:Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=True,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=250,
evaluation_strategy=steps,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=True,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=True,
local_rank=-1,
log_level=passive,
log_level_replica=passive,
log_on_each_node=True,
logging_dir=roberta_gpt_out/gpt2_custom/runs/Feb15_16-43-51_b7fb20e65b38,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=100,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=2500,
metric_for_best_model=accuracy,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
optim=adamw_hf,
optim_args=None,
output_dir=roberta_gpt_out/gpt2_custom,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=24,
per_device_train_batch_size=24,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=roberta_gpt_out/gpt2_custom,
save_on_each_node=False,
save_steps=500,
save_strategy=steps,
save_total_limit=5,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
INFO:__main__:load a local file for train: data/train.json
INFO:__main__:load a local file for validation: data/valid.json
INFO:__main__:load a local file for test: data/test.json
WARNING:datasets.builder:Using custom data configuration default-2fc7d0d25bce81a9
INFO:datasets.info:Loading Dataset Infos from /usr/local/lib/python3.8/dist-packages/datasets/packaged_modules/json
INFO:datasets.builder:Generating dataset json (/content/roberta_gpt_cache/custom_gpt2/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
Downloading and preparing dataset json/default to /content/roberta_gpt_cache/custom_gpt2/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100% 3/3 [00:00<00:00, 14529.92it/s]
INFO:datasets.download.download_manager:Downloading took 0.0 min
INFO:datasets.download.download_manager:Checksum Computation took 0.0 min
Extracting data files: 100% 3/3 [00:00<00:00, 1944.51it/s]
INFO:datasets.utils.info_utils:Unable to verify checksums.
INFO:datasets.builder:Generating train split
INFO:datasets.builder:Generating validation split
INFO:datasets.builder:Generating test split
INFO:datasets.utils.info_utils:Unable to verify splits sizes.
Dataset json downloaded and prepared to /content/roberta_gpt_cache/custom_gpt2/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.
100% 3/3 [00:00<00:00, 936.72it/s]
Downloading (…)lve/main/config.json: 100% 665/665 [00:00<00:00, 116kB/s]
[INFO|configuration_utils.py:660] 2023-02-15 16:43:53,242 >> loading configuration file config.json from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/config.json
[INFO|configuration_utils.py:712] 2023-02-15 16:43:53,243 >> Model config GPT2Config {
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "transformers_version": "4.26.1",
  "use_cache": true,
  "vocab_size": 50257
}

[INFO|tokenization_auto.py:458] 2023-02-15 16:43:53,504 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:660] 2023-02-15 16:43:53,769 >> loading configuration file config.json from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/config.json
[INFO|configuration_utils.py:712] 2023-02-15 16:43:53,769 >> Model config GPT2Config {
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "transformers_version": "4.26.1",
  "use_cache": true,
  "vocab_size": 50257
}

Downloading (…)olve/main/vocab.json: 100% 1.04M/1.04M [00:00<00:00, 3.44MB/s]
Downloading (…)olve/main/merges.txt: 100% 456k/456k [00:00<00:00, 1.88MB/s]
Downloading (…)/main/tokenizer.json: 100% 1.36M/1.36M [00:00<00:00, 3.74MB/s]
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:43:57,245 >> loading file vocab.json from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/vocab.json
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:43:57,245 >> loading file merges.txt from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/merges.txt
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:43:57,245 >> loading file tokenizer.json from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/tokenizer.json
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:43:57,245 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:43:57,245 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1802] 2023-02-15 16:43:57,245 >> loading file tokenizer_config.json from cache at None
[INFO|configuration_utils.py:660] 2023-02-15 16:43:57,245 >> loading configuration file config.json from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/config.json
[INFO|configuration_utils.py:712] 2023-02-15 16:43:57,246 >> Model config GPT2Config {
  (… identical to the GPT2Config dump above …)
}

INFO:__main__:Using hidden states in model: False
INFO:__main__:Using implementation from class: GPT2ForSequenceClassificationCustomFIX
Downloading (…)"pytorch_model.bin";: 100% 548M/548M [00:03<00:00, 166MB/s]
[INFO|modeling_utils.py:2275] 2023-02-15 16:44:01,011 >> loading weights file pytorch_model.bin from cache at roberta_gpt_cache/custom_gpt2/models--gpt2/snapshots/e7da7f221d5bf496a48136c0cd264e630fe9fcc8/pytorch_model.bin
[INFO|modeling_utils.py:2857] 2023-02-15 16:44:03,582 >> All model checkpoint weights were used when initializing GPT2ForSequenceClassificationCustomFIX.

[WARNING|modeling_utils.py:2859] 2023-02-15 16:44:03,582 >> Some weights of GPT2ForSequenceClassificationCustomFIX were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.dense_3_hidden.weight', 'score.dense_1_input.bias', 'score.dense_2_hidden.bias', 'score.out_proj.weight', 'score.dense_2.bias', 'score.dense_4.bias', 'score.dense_2_hidden.weight', 'score.dense_3.weight', 'score.dense_4.weight', 'score.dense_2.weight', 'score.dense_1_input.weight', 'score.dense_1_hidden.weight', 'score.dense_1_hidden.bias', 'score.dense_3_hidden.bias', 'score.dense_3.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[ERROR|tokenization_utils_base.py:1042] 2023-02-15 16:44:03,591 >> Using pad_token, but it is not set yet.
INFO:__main__:Set PAD token to EOS: <|endoftext|>
transformer.wte.weight
transformer.wpe.weight
transformer.h.0.ln_1.weight
transformer.h.0.ln_1.bias
transformer.h.0.attn.c_attn.weight
transformer.h.0.attn.c_attn.bias
transformer.h.0.attn.c_proj.weight
transformer.h.0.attn.c_proj.bias
transformer.h.0.ln_2.weight
transformer.h.0.ln_2.bias
transformer.h.0.mlp.c_fc.weight
transformer.h.0.mlp.c_fc.bias
transformer.h.0.mlp.c_proj.weight
transformer.h.0.mlp.c_proj.bias
(… the same ln_1 / attn.c_attn / attn.c_proj / ln_2 / mlp.c_fc / mlp.c_proj parameter set repeats for transformer.h.1 through transformer.h.11 …)
transformer.ln_f.weight
transformer.ln_f.bias
score.dense_1_input.weight
score.dense_1_input.bias
score.dense_1_hidden.weight
score.dense_1_hidden.bias
score.dense_2.weight
score.dense_2.bias
score.dense_2_hidden.weight
score.dense_2_hidden.bias
score.dense_3.weight
score.dense_3.bias
score.dense_3_hidden.weight
score.dense_3_hidden.bias
score.dense_4.weight
score.dense_4.bias
score.out_proj.weight
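
The dump above looks like a plain debug loop over the model's parameters. A minimal reproduction (with the stock class the list ends in a single score.weight; the custom head in the project's run_glue.py adds the score.dense_* layers shown above):

from transformers import GPT2ForSequenceClassification

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=6)
for name, _ in model.named_parameters():
    print(name)  # transformer.* parameters followed by the score head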
Running tokenizer on dataset:   0% 0/16 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/roberta_gpt_cache/custom_gpt2/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-bea742f4a59644e1.arrow
Running tokenizer on dataset: 100% 16/16 [00:00<00:00, 16.16ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/roberta_gpt_cache/custom_gpt2/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-70f9aa9f07dd2d49.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 18.31ba/s]
Running tokenizer on dataset:   0% 0/2 [00:00<?, ?ba/s]INFO:datasets.arrow_dataset:Caching processed dataset at /content/roberta_gpt_cache/custom_gpt2/json/default-2fc7d0d25bce81a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d96eda4fa0456cad.arrow
Running tokenizer on dataset: 100% 2/2 [00:00<00:00, 19.03ba/s]
INFO:__main__:Sample 10476 of the training set: {'label': 0, 'text': 'i do find new friends i m going to try extra hard to make them stay and if i decide that i don t want to feel hurt again and just ride out the last year of school on my own i m going to have to try extra hard not to care what people think of me being a loner', 'input_ids': [72, 466, 1064, 649, 2460, 1312, 285, 1016, 284, 1949, 3131, 1327, 284, 787, 606, 2652, 290, 611, 1312, 5409, 326, 1312, 836, 256, 765, 284, 1254, 5938, 757, 290, 655, 6594, 503, 262, 938, 614, 286, 1524, 319, 616, 898, 1312, 285, 1016, 284, 423, 284, 1949, 3131, 1327, 407, 284, 1337, 644, 661, 892, 286, 502, 852, 257, 300, 14491, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 1824 of the training set: {'label': 1, 'text': 'i asked them to join me in creating a world where all year old girls could grow up feeling hopeful and powerful', 'input_ids': [72, 1965, 606, 284, 4654, 502, 287, 4441, 257, 995, 810, 477, 614, 1468, 4813, 714, 1663, 510, 4203, 17836, 290, 3665, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
INFO:__main__:Sample 409 of the training set: {'label': 2, 'text': 'i feel when you are a caring person you attract other caring people into your life', 'input_ids': [72, 1254, 618, 345, 389, 257, 18088, 1048, 345, 4729, 584, 18088, 661, 656, 534, 1204, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
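
The padded input_ids above reflect GPT-2 shipping without a pad token: the script reuses <|endoftext|> (id 50256) as PAD, as logged earlier ("Set PAD token to EOS"). A minimal sketch of that preprocessing; the exact call inside run_glue.py may differ:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # reuse <|endoftext|> (50256) as PAD

enc = tokenizer(
    "i feel when you are a caring person you attract other caring people into your life",
    padding="max_length",
    max_length=128,
    truncation=True,
)
# The sequence is right-padded with 50256 and masked out, as in sample 409 above.
print(enc["input_ids"][:20])
print(enc["attention_mask"][:20])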
[INFO|trainer.py:511] 2023-02-15 16:44:06,852 >> max_steps is given, it will override any value given in num_train_epochs
[INFO|trainer.py:710] 2023-02-15 16:44:06,852 >> The following columns in the training set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
[INFO|trainer.py:1650] 2023-02-15 16:44:06,859 >> ***** Running training *****
[INFO|trainer.py:1651] 2023-02-15 16:44:06,859 >>   Num examples = 16000
[INFO|trainer.py:1652] 2023-02-15 16:44:06,859 >>   Num Epochs = 4
[INFO|trainer.py:1653] 2023-02-15 16:44:06,859 >>   Instantaneous batch size per device = 24
[INFO|trainer.py:1654] 2023-02-15 16:44:06,859 >>   Total train batch size (w. parallel, distributed & accumulation) = 24
[INFO|trainer.py:1655] 2023-02-15 16:44:06,859 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1656] 2023-02-15 16:44:06,859 >>   Total optimization steps = 2500
[INFO|trainer.py:1657] 2023-02-15 16:44:06,860 >>   Number of trainable parameters = 135071232
{'loss': 1.6316, 'learning_rate': 1.9200000000000003e-05, 'epoch': 0.15}
{'loss': 1.4361, 'learning_rate': 1.8400000000000003e-05, 'epoch': 0.3}
 10% 250/2500 [00:43<06:23,  5.87it/s][INFO|trainer.py:710] 2023-02-15 16:44:50,580 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:44:50,582 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:44:50,582 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:44:50,582 >>   Batch size = 24

{'eval_loss': 0.8915246725082397, 'eval_accuracy': 0.671999990940094, 'eval_runtime': 4.5739, 'eval_samples_per_second': 437.261, 'eval_steps_per_second': 18.365, 'epoch': 0.37}

{'loss': 0.9392, 'learning_rate': 1.76e-05, 'epoch': 0.45}
{'loss': 0.6756, 'learning_rate': 1.6800000000000002e-05, 'epoch': 0.6}
{'loss': 0.4982, 'learning_rate': 1.6000000000000003e-05, 'epoch': 0.75}
 20% 500/2500 [01:30<05:40,  5.88it/s][INFO|trainer.py:710] 2023-02-15 16:45:37,856 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:45:37,857 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:45:37,858 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:45:37,858 >>   Batch size = 24

{'eval_loss': 0.37129098176956177, 'eval_accuracy': 0.8725000023841858, 'eval_runtime': 4.5697, 'eval_samples_per_second': 437.663, 'eval_steps_per_second': 18.382, 'epoch': 0.75}

[INFO|trainer.py:2709] 2023-02-15 16:45:42,428 >> Saving model checkpoint to roberta_gpt_out/gpt2_custom/checkpoint-500
[INFO|configuration_utils.py:453] 2023-02-15 16:45:42,429 >> Configuration saved in roberta_gpt_out/gpt2_custom/checkpoint-500/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:45:43,202 >> Model weights saved in roberta_gpt_out/gpt2_custom/checkpoint-500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:45:43,202 >> tokenizer config file saved in roberta_gpt_out/gpt2_custom/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:45:43,203 >> Special tokens file saved in roberta_gpt_out/gpt2_custom/checkpoint-500/special_tokens_map.json
{'loss': 0.4123, 'learning_rate': 1.5200000000000002e-05, 'epoch': 0.9}
{'loss': 0.3322, 'learning_rate': 1.4400000000000001e-05, 'epoch': 1.05}
 30% 750/2500 [02:20<04:58,  5.87it/s][INFO|trainer.py:710] 2023-02-15 16:46:27,753 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:46:27,754 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:46:27,754 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:46:27,755 >>   Batch size = 24

{'eval_loss': 0.24756397306919098, 'eval_accuracy': 0.9194999933242798, 'eval_runtime': 4.5615, 'eval_samples_per_second': 438.453, 'eval_steps_per_second': 18.415, 'epoch': 1.12}

{'loss': 0.2514, 'learning_rate': 1.3600000000000002e-05, 'epoch': 1.2}
{'loss': 0.2498, 'learning_rate': 1.2800000000000001e-05, 'epoch': 1.35}
{'loss': 0.2478, 'learning_rate': 1.2e-05, 'epoch': 1.5}
 40% 1000/2500 [03:08<04:15,  5.86it/s][INFO|trainer.py:710] 2023-02-15 16:47:15,045 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:47:15,047 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:47:15,047 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:47:15,047 >>   Batch size = 24

{'eval_loss': 0.19117043912410736, 'eval_accuracy': 0.9279999732971191, 'eval_runtime': 4.5621, 'eval_samples_per_second': 438.393, 'eval_steps_per_second': 18.412, 'epoch': 1.5}

[INFO|trainer.py:2709] 2023-02-15 16:47:19,610 >> Saving model checkpoint to roberta_gpt_out/gpt2_custom/checkpoint-1000
[INFO|configuration_utils.py:453] 2023-02-15 16:47:19,611 >> Configuration saved in roberta_gpt_out/gpt2_custom/checkpoint-1000/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:47:20,386 >> Model weights saved in roberta_gpt_out/gpt2_custom/checkpoint-1000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:47:20,387 >> tokenizer config file saved in roberta_gpt_out/gpt2_custom/checkpoint-1000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:47:20,387 >> Special tokens file saved in roberta_gpt_out/gpt2_custom/checkpoint-1000/special_tokens_map.json
{'loss': 0.2036, 'learning_rate': 1.1200000000000001e-05, 'epoch': 1.65}
{'loss': 0.2155, 'learning_rate': 1.04e-05, 'epoch': 1.8}
 50% 1250/2500 [03:58<03:34,  5.84it/s][INFO|trainer.py:710] 2023-02-15 16:48:04,913 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:48:04,915 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:48:04,915 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:48:04,915 >>   Batch size = 24

{'eval_loss': 0.16847124695777893, 'eval_accuracy': 0.9319999814033508, 'eval_runtime': 4.5608, 'eval_samples_per_second': 438.519, 'eval_steps_per_second': 18.418, 'epoch': 1.87}

{'loss': 0.2466, 'learning_rate': 9.600000000000001e-06, 'epoch': 1.95}
{'loss': 0.1791, 'learning_rate': 8.8e-06, 'epoch': 2.1}
{'loss': 0.172, 'learning_rate': 8.000000000000001e-06, 'epoch': 2.25}
 60% 1500/2500 [04:45<02:51,  5.84it/s][INFO|trainer.py:710] 2023-02-15 16:48:52,173 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:48:52,174 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:48:52,175 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:48:52,175 >>   Batch size = 24

{'eval_loss': 0.18199807405471802, 'eval_accuracy': 0.9304999709129333, 'eval_runtime': 4.5537, 'eval_samples_per_second': 439.207, 'eval_steps_per_second': 18.447, 'epoch': 2.25}

[INFO|trainer.py:2709] 2023-02-15 16:48:56,729 >> Saving model checkpoint to roberta_gpt_out/gpt2_custom/checkpoint-1500
[INFO|configuration_utils.py:453] 2023-02-15 16:48:56,730 >> Configuration saved in roberta_gpt_out/gpt2_custom/checkpoint-1500/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:48:57,516 >> Model weights saved in roberta_gpt_out/gpt2_custom/checkpoint-1500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:48:57,517 >> tokenizer config file saved in roberta_gpt_out/gpt2_custom/checkpoint-1500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:48:57,517 >> Special tokens file saved in roberta_gpt_out/gpt2_custom/checkpoint-1500/special_tokens_map.json
{'loss': 0.1795, 'learning_rate': 7.2000000000000005e-06, 'epoch': 2.4}
{'loss': 0.1792, 'learning_rate': 6.4000000000000006e-06, 'epoch': 2.55}
 70% 1750/2500 [05:35<02:09,  5.81it/s][INFO|trainer.py:710] 2023-02-15 16:49:42,128 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:49:42,130 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:49:42,130 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:49:42,130 >>   Batch size = 24

{'eval_loss': 0.1523813158273697, 'eval_accuracy': 0.9334999918937683, 'eval_runtime': 4.5557, 'eval_samples_per_second': 439.008, 'eval_steps_per_second': 18.438, 'epoch': 2.62}

{'loss': 0.1561, 'learning_rate': 5.600000000000001e-06, 'epoch': 2.7}
{'loss': 0.1587, 'learning_rate': 4.800000000000001e-06, 'epoch': 2.85}
{'loss': 0.1505, 'learning_rate': 4.000000000000001e-06, 'epoch': 3.0}
 80% 2000/2500 [06:22<01:25,  5.88it/s][INFO|trainer.py:710] 2023-02-15 16:50:29,449 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:50:29,451 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:50:29,451 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:50:29,451 >>   Batch size = 24

{'eval_loss': 0.15084920823574066, 'eval_accuracy': 0.9375, 'eval_runtime': 4.5639, 'eval_samples_per_second': 438.222, 'eval_steps_per_second': 18.405, 'epoch': 3.0}

[INFO|trainer.py:2709] 2023-02-15 16:50:34,016 >> Saving model checkpoint to roberta_gpt_out/gpt2_custom/checkpoint-2000
[INFO|configuration_utils.py:453] 2023-02-15 16:50:34,017 >> Configuration saved in roberta_gpt_out/gpt2_custom/checkpoint-2000/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:50:34,796 >> Model weights saved in roberta_gpt_out/gpt2_custom/checkpoint-2000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:50:34,797 >> tokenizer config file saved in roberta_gpt_out/gpt2_custom/checkpoint-2000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:50:34,797 >> Special tokens file saved in roberta_gpt_out/gpt2_custom/checkpoint-2000/special_tokens_map.json
{'loss': 0.137, 'learning_rate': 3.2000000000000003e-06, 'epoch': 3.15}
{'loss': 0.1429, 'learning_rate': 2.4000000000000003e-06, 'epoch': 3.3}
 90% 2250/2500 [07:12<00:42,  5.87it/s][INFO|trainer.py:710] 2023-02-15 16:51:19,304 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:51:19,305 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:51:19,306 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:51:19,306 >>   Batch size = 24

{'eval_loss': 0.15722069144248962, 'eval_accuracy': 0.9319999814033508, 'eval_runtime': 4.5701, 'eval_samples_per_second': 437.626, 'eval_steps_per_second': 18.38, 'epoch': 3.37}

{'loss': 0.1575, 'learning_rate': 1.6000000000000001e-06, 'epoch': 3.45}
{'loss': 0.1367, 'learning_rate': 8.000000000000001e-07, 'epoch': 3.6}
{'loss': 0.1238, 'learning_rate': 0.0, 'epoch': 3.75}
100% 2500/2500 [07:59<00:00,  5.86it/s][INFO|trainer.py:710] 2023-02-15 16:52:06,584 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:52:06,586 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:52:06,586 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:52:06,586 >>   Batch size = 24

{'eval_loss': 0.14900894463062286, 'eval_accuracy': 0.934499979019165, 'eval_runtime': 4.5505, 'eval_samples_per_second': 439.516, 'eval_steps_per_second': 18.46, 'epoch': 3.75}

[INFO|trainer.py:2709] 2023-02-15 16:52:11,137 >> Saving model checkpoint to roberta_gpt_out/gpt2_custom/checkpoint-2500
[INFO|configuration_utils.py:453] 2023-02-15 16:52:11,138 >> Configuration saved in roberta_gpt_out/gpt2_custom/checkpoint-2500/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:52:11,913 >> Model weights saved in roberta_gpt_out/gpt2_custom/checkpoint-2500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:52:11,914 >> tokenizer config file saved in roberta_gpt_out/gpt2_custom/checkpoint-2500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:52:11,914 >> Special tokens file saved in roberta_gpt_out/gpt2_custom/checkpoint-2500/special_tokens_map.json
[INFO|trainer.py:1901] 2023-02-15 16:52:13,776 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)


[INFO|trainer.py:2025] 2023-02-15 16:52:13,776 >> Loading best model from roberta_gpt_out/gpt2_custom/checkpoint-2000 (score: 0.9375).
{'train_runtime': 487.3719, 'train_samples_per_second': 123.109, 'train_steps_per_second': 5.13, 'train_loss': 0.36852474327087403, 'epoch': 3.75}
100% 2500/2500 [08:07<00:00,  5.13it/s]
[INFO|trainer.py:2709] 2023-02-15 16:52:14,234 >> Saving model checkpoint to roberta_gpt_out/gpt2_custom
[INFO|configuration_utils.py:453] 2023-02-15 16:52:14,235 >> Configuration saved in roberta_gpt_out/gpt2_custom/config.json
[INFO|modeling_utils.py:1704] 2023-02-15 16:52:15,266 >> Model weights saved in roberta_gpt_out/gpt2_custom/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 16:52:15,267 >> tokenizer config file saved in roberta_gpt_out/gpt2_custom/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 16:52:15,267 >> Special tokens file saved in roberta_gpt_out/gpt2_custom/special_tokens_map.json
***** train metrics *****
  epoch                    =       3.75
  train_loss               =     0.3685
  train_runtime            = 0:08:07.37
  train_samples            =      16000
  train_samples_per_second =    123.109
  train_steps_per_second   =       5.13
INFO:__main__:*** Evaluate ***
[INFO|trainer.py:710] 2023-02-15 16:52:15,371 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:52:15,373 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 16:52:15,373 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:52:15,373 >>   Batch size = 24
100% 84/84 [00:04<00:00, 18.51it/s]
***** eval metrics *****
  epoch                   =       3.75
  eval_accuracy           =     0.9375
  eval_loss               =     0.1508
  eval_runtime            = 0:00:04.60
  eval_samples            =       2000
  eval_samples_per_second =    434.443
  eval_steps_per_second   =     18.247
INFO:__main__:*** Predict ***
[INFO|trainer.py:710] 2023-02-15 16:52:19,980 >> The following columns in the test set don't have a corresponding argument in `GPT2ForSequenceClassificationCustomFIX.forward` and have been ignored: text. If text are not expected by `GPT2ForSequenceClassificationCustomFIX.forward`,  you can safely ignore this message.
[INFO|trainer.py:2964] 2023-02-15 16:52:19,981 >> ***** Running Prediction *****
[INFO|trainer.py:2966] 2023-02-15 16:52:19,981 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 16:52:19,981 >>   Batch size = 24
100% 84/84 [00:04<00:00, 18.73it/s]
INFO:__main__:***** Predict results None *****
[INFO|modelcard.py:449] 2023-02-15 16:52:24,864 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.9375}]}

T5

  • full data
  • model: T5 (google/t5-v1_1-large)
  • sequence length: 128
  • training epochs: 1
  • first few layers frozen (see the sketch after this list)
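
The freezing itself happens inside the project's run_translation.py. A minimal sketch of the idea; which layers count as the "first few" is an assumption:

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-large")

# Hypothetical choice: freeze the shared embeddings and the first four encoder blocks.
model.shared.requires_grad_(False)
for block in model.encoder.block[:4]:
    for param in block.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")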
!python run_translation.py \
  --cache_dir t5_large_cache/t5_v1_1 \
  --model_name_or_path "google/t5-v1_1-large" \
  --train_file data/s2s-train.json \
  --validation_file data/s2s-valid.json \
  --test_file data/s2s-test.json \
  --per_device_train_batch_size 8 \
  --per_device_eval_batch_size 8 \
  --source_lang "text" \
  --target_lang "label" \
  --source_prefix "emotion classification" \
  --max_source_length 256 \
  --max_target_length 128 \
  --generation_max_length 128 \
  --do_train \
  --do_eval \
  --do_predict \
  --predict_with_generate \
  --num_train_epochs 1 \
  --output_dir t5_large_out/t5_v1_1  \
  --overwrite_output_dir \
  --eval_steps 250 \
  --evaluation_strategy steps \
  --metric_for_best_model accuracy \
  --logging_steps 100 \
  --save_total_limit 5 \
  --max_steps 2500 \
  --load_best_model_at_end True 
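
The command above casts the task as text-to-text: --source_lang "text" and --target_lang "label" make run_translation.py read each record's text field and generate the label word. The layout of data/s2s-*.json is not shown in this notebook; assuming the JSON-lines "translation" convention of the Hugging Face translation example, the classification splits could have been converted like this:

import json

# Label words assumed from the dataset's label ids (0=sadness ... 5=surprise).
ID2LABEL = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def to_s2s(src_path: str, dst_path: str) -> None:
    """Hypothetical converter: classification JSON lines -> seq2seq JSON lines."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            ex = json.loads(line)
            record = {"translation": {"text": ex["text"],
                                      "label": ID2LABEL[ex["label"]]}}
            dst.write(json.dumps(record) + "\n")

to_s2s("data/train.json", "data/s2s-train.json")
to_s2s("data/valid.json", "data/s2s-valid.json")
to_s2s("data/test.json", "data/s2s-test.json")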
Streaming output truncated to the last 5000 lines.
  "transformers_version": "4.26.1"
}


 28% 71/250 [00:14<00:36,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:28,987 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

(… the identical "Generate config GenerationConfig" block is re-logged at every subsequent generation step; the remaining repetitions are omitted …)
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 39% 97/250 [00:19<00:30,  4.98it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:34,317 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 39% 98/250 [00:19<00:30,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:34,523 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 40% 99/250 [00:20<00:30,  4.96it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:34,724 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 40% 100/250 [00:20<00:30,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:34,926 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 40% 101/250 [00:20<00:30,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:35,130 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 41% 102/250 [00:20<00:30,  4.88it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:35,341 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 41% 103/250 [00:20<00:29,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:35,540 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 42% 104/250 [00:21<00:29,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:35,739 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 42% 105/250 [00:21<00:29,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:35,941 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 42% 106/250 [00:21<00:29,  4.93it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:36,145 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 43% 107/250 [00:21<00:29,  4.93it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:36,348 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 43% 108/250 [00:21<00:28,  4.97it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:36,545 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 44% 109/250 [00:22<00:29,  4.81it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:36,768 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 44% 110/250 [00:22<00:28,  4.83it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:36,974 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 44% 111/250 [00:22<00:28,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:37,175 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 45% 112/250 [00:22<00:28,  4.84it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:37,388 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 45% 113/250 [00:22<00:28,  4.84it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:37,591 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 46% 114/250 [00:23<00:28,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:37,796 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 46% 115/250 [00:23<00:27,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:38,000 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 46% 116/250 [00:23<00:27,  4.84it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:38,209 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 47% 117/250 [00:23<00:27,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:38,414 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 47% 118/250 [00:24<00:27,  4.81it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:38,626 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 48% 119/250 [00:24<00:27,  4.80it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:38,836 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 48% 120/250 [00:24<00:27,  4.69it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:39,060 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 48% 121/250 [00:24<00:28,  4.46it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:39,311 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 49% 122/250 [00:24<00:27,  4.63it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:39,506 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 49% 123/250 [00:25<00:26,  4.72it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:39,709 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 50% 124/250 [00:25<00:26,  4.80it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:39,909 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 50% 125/250 [00:25<00:25,  4.81it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:40,115 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 50% 126/250 [00:25<00:25,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:40,316 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 51% 127/250 [00:25<00:25,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:40,514 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 51% 128/250 [00:26<00:24,  4.97it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:40,711 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 52% 129/250 [00:26<00:24,  4.93it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:40,917 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 52% 130/250 [00:26<00:24,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:41,117 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 52% 131/250 [00:26<00:24,  4.93it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:41,322 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 53% 132/250 [00:26<00:23,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:41,524 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 53% 133/250 [00:27<00:24,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:41,736 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 54% 134/250 [00:27<00:23,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:41,940 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 54% 135/250 [00:27<00:23,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:42,144 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 54% 136/250 [00:27<00:23,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:42,344 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 55% 137/250 [00:27<00:22,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:42,545 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 55% 138/250 [00:28<00:22,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:42,751 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 56% 139/250 [00:28<00:22,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:42,955 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 56% 140/250 [00:28<00:22,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:43,157 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 56% 141/250 [00:28<00:22,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:43,357 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 57% 142/250 [00:28<00:21,  4.99it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:43,553 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 57% 143/250 [00:29<00:21,  4.99it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:43,753 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 58% 144/250 [00:29<00:21,  4.98it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:43,955 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 58% 145/250 [00:29<00:21,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:44,160 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 58% 146/250 [00:29<00:21,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:44,363 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 59% 147/250 [00:29<00:20,  4.98it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:44,560 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 59% 148/250 [00:30<00:20,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:44,766 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 60% 149/250 [00:30<00:20,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:44,968 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 60% 150/250 [00:30<00:20,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:45,173 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 60% 151/250 [00:30<00:20,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:45,375 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 61% 152/250 [00:30<00:19,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:45,579 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 61% 153/250 [00:31<00:19,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:45,784 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 62% 154/250 [00:31<00:19,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:45,987 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 62% 155/250 [00:31<00:19,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:46,197 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 62% 156/250 [00:31<00:19,  4.79it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:46,414 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 63% 157/250 [00:31<00:19,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:46,614 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 63% 158/250 [00:32<00:18,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:46,819 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 64% 159/250 [00:32<00:18,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:47,018 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 64% 160/250 [00:32<00:18,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:47,216 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 64% 161/250 [00:32<00:18,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:47,422 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 65% 162/250 [00:33<00:17,  4.96it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:47,620 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 65% 163/250 [00:33<00:17,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:47,834 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 66% 164/250 [00:33<00:17,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:48,033 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 66% 165/250 [00:33<00:17,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:48,237 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 66% 166/250 [00:33<00:17,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:48,445 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 67% 167/250 [00:34<00:17,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:48,653 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 67% 168/250 [00:34<00:16,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:48,860 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 68% 169/250 [00:34<00:16,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:49,064 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 68% 170/250 [00:34<00:16,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:49,266 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 68% 171/250 [00:34<00:16,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:49,469 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 69% 172/250 [00:35<00:15,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:49,671 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 69% 173/250 [00:35<00:15,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:49,873 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 70% 174/250 [00:35<00:15,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:50,079 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 70% 175/250 [00:35<00:15,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:50,284 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 70% 176/250 [00:35<00:15,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:50,486 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 71% 177/250 [00:36<00:14,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:50,690 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 71% 178/250 [00:36<00:14,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:50,895 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 72% 179/250 [00:36<00:14,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:51,105 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 72% 180/250 [00:36<00:14,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:51,311 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 72% 181/250 [00:36<00:14,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:51,511 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 73% 182/250 [00:37<00:13,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:51,715 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 73% 183/250 [00:37<00:13,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:51,918 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 74% 184/250 [00:37<00:13,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:52,122 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 74% 185/250 [00:37<00:13,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:52,331 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 74% 186/250 [00:37<00:12,  4.93it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:52,528 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 75% 187/250 [00:38<00:12,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:52,733 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 75% 188/250 [00:38<00:12,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:52,935 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 76% 189/250 [00:38<00:12,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:53,139 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 76% 190/250 [00:38<00:12,  4.95it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:53,338 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 76% 191/250 [00:38<00:11,  4.96it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:53,539 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 77% 192/250 [00:39<00:11,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:53,743 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 77% 193/250 [00:39<00:11,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:53,949 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 78% 194/250 [00:39<00:11,  4.83it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:54,165 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 78% 195/250 [00:39<00:11,  4.83it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:54,371 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 78% 196/250 [00:39<00:11,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:54,575 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 79% 197/250 [00:40<00:10,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:54,775 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 79% 198/250 [00:40<00:10,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:54,983 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 80% 199/250 [00:40<00:10,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:55,181 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 80% 200/250 [00:40<00:10,  4.97it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:55,378 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 80% 201/250 [00:40<00:09,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:55,583 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 81% 202/250 [00:41<00:09,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:55,786 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 81% 203/250 [00:41<00:09,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:55,999 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 82% 204/250 [00:41<00:10,  4.56it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:56,250 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 82% 205/250 [00:41<00:09,  4.65it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:56,455 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 82% 206/250 [00:42<00:09,  4.71it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:56,661 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 83% 207/250 [00:42<00:08,  4.79it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:56,862 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 83% 208/250 [00:42<00:08,  4.69it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:57,085 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 84% 209/250 [00:42<00:08,  4.75it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:57,289 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 84% 210/250 [00:42<00:08,  4.80it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:57,493 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 84% 211/250 [00:43<00:08,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:57,692 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 85% 212/250 [00:43<00:07,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:57,897 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 85% 213/250 [00:43<00:07,  4.84it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:58,107 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 86% 214/250 [00:43<00:07,  4.88it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:58,308 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 86% 215/250 [00:43<00:07,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:58,516 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 86% 216/250 [00:44<00:06,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:58,715 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 87% 217/250 [00:44<00:06,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:58,917 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 87% 218/250 [00:44<00:06,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:59,125 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 88% 219/250 [00:44<00:06,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:59,329 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 88% 220/250 [00:44<00:06,  4.93it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:59,528 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 88% 221/250 [00:45<00:05,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:59,733 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 89% 222/250 [00:45<00:05,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:21:59,936 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 89% 223/250 [00:45<00:05,  4.94it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:00,137 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 90% 224/250 [00:45<00:05,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:00,345 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 90% 225/250 [00:45<00:05,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:00,549 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 90% 226/250 [00:46<00:04,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:00,758 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 91% 227/250 [00:46<00:04,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:00,956 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 91% 228/250 [00:46<00:04,  4.96it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:01,154 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 92% 229/250 [00:46<00:04,  4.98it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:01,353 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 92% 230/250 [00:46<00:04,  4.92it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:01,561 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 92% 231/250 [00:47<00:03,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:01,767 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 93% 232/250 [00:47<00:03,  4.86it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:01,977 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 93% 233/250 [00:47<00:03,  4.83it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:02,188 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 94% 234/250 [00:47<00:03,  4.88it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:02,387 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 94% 235/250 [00:47<00:03,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:02,591 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 94% 236/250 [00:48<00:02,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:02,792 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 95% 237/250 [00:48<00:02,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:02,999 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 95% 238/250 [00:48<00:02,  4.84it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:03,211 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 96% 239/250 [00:48<00:02,  4.87it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:03,413 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 96% 240/250 [00:48<00:02,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:03,614 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 96% 241/250 [00:49<00:01,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:03,816 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 97% 242/250 [00:49<00:01,  4.89it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:04,023 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 97% 243/250 [00:49<00:01,  4.88it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:04,229 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 98% 244/250 [00:49<00:01,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:04,429 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 98% 245/250 [00:50<00:01,  4.84it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:04,644 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 98% 246/250 [00:50<00:00,  4.82it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:04,853 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 99% 247/250 [00:50<00:00,  4.85it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:05,056 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


 99% 248/250 [00:50<00:00,  4.91it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:05,254 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


100% 249/250 [00:50<00:00,  4.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:05,460 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}


100% 250/250 [00:51<00:00,  4.93it/s]
{'eval_loss': 0.43915173411369324, 'eval_bleu': 0.0, 'eval_accuracy': 1.0, 'eval_gen_len': 2.0, 'eval_runtime': 51.3996, 'eval_samples_per_second': 38.911, 'eval_steps_per_second': 4.864, 'epoch': 1.25}

100% 2500/2500 [44:34<00:00,  3.50it/s]
                                     [INFO|trainer.py:2709] 2023-02-15 18:22:05,804 >> Saving model checkpoint to t5_large_out/t5_v1_1/checkpoint-2500
[INFO|configuration_utils.py:453] 2023-02-15 18:22:05,805 >> Configuration saved in t5_large_out/t5_v1_1/checkpoint-2500/config.json
[INFO|configuration_utils.py:336] 2023-02-15 18:22:05,810 >> Configuration saved in t5_large_out/t5_v1_1/checkpoint-2500/generation_config.json
[INFO|modeling_utils.py:1704] 2023-02-15 18:22:11,133 >> Model weights saved in t5_large_out/t5_v1_1/checkpoint-2500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 18:22:11,134 >> tokenizer config file saved in t5_large_out/t5_v1_1/checkpoint-2500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 18:22:11,134 >> Special tokens file saved in t5_large_out/t5_v1_1/checkpoint-2500/special_tokens_map.json
[INFO|tokenization_t5_fast.py:186] 2023-02-15 18:22:11,171 >> Copy vocab file to t5_large_out/t5_v1_1/checkpoint-2500/spiece.model
[INFO|trainer.py:1901] 2023-02-15 18:22:22,537 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)


[INFO|trainer.py:2025] 2023-02-15 18:22:22,537 >> Loading best model from t5_large_out/t5_v1_1/checkpoint-500 (score: 1.0).
{'train_runtime': 2694.094, 'train_samples_per_second': 7.424, 'train_steps_per_second': 0.928, 'train_loss': 4.872698245239258, 'epoch': 1.25}
100% 2500/2500 [44:54<00:00,  1.08s/it]
[INFO|trainer.py:2709] 2023-02-15 18:22:25,217 >> Saving model checkpoint to t5_large_out/t5_v1_1
[INFO|configuration_utils.py:453] 2023-02-15 18:22:25,218 >> Configuration saved in t5_large_out/t5_v1_1/config.json
[INFO|configuration_utils.py:336] 2023-02-15 18:22:25,223 >> Configuration saved in t5_large_out/t5_v1_1/generation_config.json
[INFO|modeling_utils.py:1704] 2023-02-15 18:22:30,856 >> Model weights saved in t5_large_out/t5_v1_1/pytorch_model.bin
[INFO|tokenization_utils_base.py:2160] 2023-02-15 18:22:30,857 >> tokenizer config file saved in t5_large_out/t5_v1_1/tokenizer_config.json
[INFO|tokenization_utils_base.py:2167] 2023-02-15 18:22:30,857 >> Special tokens file saved in t5_large_out/t5_v1_1/special_tokens_map.json
[INFO|tokenization_t5_fast.py:186] 2023-02-15 18:22:30,902 >> Copy vocab file to t5_large_out/t5_v1_1/spiece.model
***** train metrics *****
  epoch                    =       1.25
  train_loss               =     4.8727
  train_runtime            = 0:44:54.09
  train_samples            =      16000
  train_samples_per_second =      7.424
  train_steps_per_second   =      0.928
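The logged epoch of 1.25 follows directly from the flags: --max_steps 2500 overrides --num_train_epochs 1. A quick check, assuming a single device so the per-device batch size is the effective batch size:

train_samples = 16000                          # train_samples from the metrics above
batch_size = 8                                 # --per_device_train_batch_size
steps_per_epoch = train_samples // batch_size  # 2000 optimizer steps per epoch
print(2500 / steps_per_epoch)                  # 1.25 -- matches the logged epoch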
INFO:__main__:*** Evaluate ***
[INFO|trainer.py:2964] 2023-02-15 18:22:30,914 >> ***** Running Evaluation *****
[INFO|trainer.py:2966] 2023-02-15 18:22:30,914 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 18:22:30,914 >>   Batch size = 8
[INFO|configuration_utils.py:543] 2023-02-15 18:22:30,926 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  0% 0/250 [00:00<?, ?it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:36,220 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  1% 2/250 [00:00<01:25,  2.90it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:36,909 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  1% 3/250 [00:00<01:15,  3.29it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:37,156 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  2% 4/250 [00:01<01:45,  2.33it/s][INFO|configuration_utils.py:543] 2023-02-15 18:22:37,797 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  2% 5/250 [00:04<05:17,  1.30s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:40,717 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  2% 6/250 [00:05<05:31,  1.36s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:42,206 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  3% 7/250 [00:06<04:20,  1.07s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:42,672 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  3% 8/250 [00:10<08:07,  2.02s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:46,742 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  4% 9/250 [00:11<06:45,  1.68s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:47,686 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  4% 10/250 [00:13<07:45,  1.94s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:50,208 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  4% 11/250 [00:14<05:55,  1.49s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:50,667 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  5% 12/250 [00:16<06:21,  1.60s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:52,536 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  5% 13/250 [00:20<09:09,  2.32s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:56,496 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  6% 14/250 [00:22<09:27,  2.41s/it][INFO|configuration_utils.py:543] 2023-02-15 18:22:59,106 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  6% 15/250 [00:25<09:24,  2.40s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:01,497 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  6% 16/250 [00:25<07:02,  1.81s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:01,918 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  7% 17/250 [00:30<10:35,  2.73s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:06,794 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  7% 18/250 [00:30<07:48,  2.02s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:07,168 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  8% 19/250 [00:35<10:58,  2.85s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:11,952 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  8% 20/250 [00:36<08:22,  2.19s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:12,587 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  8% 21/250 [00:38<08:28,  2.22s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:14,886 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  9% 22/250 [00:38<06:14,  1.64s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:15,182 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

  9% 23/250 [00:39<04:43,  1.25s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:15,513 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 10% 24/250 [00:40<05:01,  1.33s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:17,039 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 10% 25/250 [00:42<05:32,  1.48s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:18,858 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 10% 26/250 [00:46<08:20,  2.23s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:22,857 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 11% 27/250 [00:51<11:32,  3.10s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:27,989 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 11% 28/250 [00:55<12:37,  3.41s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:32,115 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 12% 29/250 [00:57<10:01,  2.72s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:33,234 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 12% 30/250 [00:58<08:49,  2.41s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:34,908 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 12% 31/250 [00:59<07:34,  2.08s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:36,212 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 13% 32/250 [01:01<07:14,  1.99s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:38,011 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 13% 33/250 [01:02<05:31,  1.53s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:38,446 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 14% 34/250 [01:05<07:51,  2.18s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:42,155 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 14% 35/250 [01:07<06:51,  1.91s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:43,445 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 14% 36/250 [01:11<09:13,  2.59s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:47,599 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 15% 37/250 [01:15<10:52,  3.06s/it][INFO|configuration_utils.py:543] 2023-02-15 18:23:51,779 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

[... the same "Generate config GenerationConfig" INFO block is logged again at every remaining evaluation step (38/250 through 249/250); the verbatim repeats are omitted ...]

100% 250/250 [10:08<00:00,  2.43s/it]
***** eval metrics *****
  epoch                   =       1.25
  eval_accuracy           =        1.0
  eval_bleu               =     0.0076
  eval_gen_len            =     12.767
  eval_loss               =     9.8625
  eval_runtime            = 0:10:14.04
  eval_samples            =       2000
  eval_samples_per_second =      3.257
  eval_steps_per_second   =      0.407
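
The summary reports BLEU next to accuracy, so the evaluation presumably decodes the generated sequences and scores them with sacrebleu through the evaluate library, as in the Hugging Face seq2seq example scripts. A minimal sketch of such a compute_metrics hook, assuming a tokenizer in scope (not lifted from this notebook's own code):

import evaluate
import numpy as np

bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    # Sketch of a seq2seq metrics hook in the style of the HF examples.
    preds, labels = eval_preds
    # -100 marks ignored label positions; restore pad ids before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    score = bleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    return {"bleu": score["score"], "gen_len": gen_len}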
INFO:__main__:*** Predict ***
[INFO|trainer.py:2964] 2023-02-15 18:32:44,958 >> ***** Running Prediction *****
[INFO|trainer.py:2966] 2023-02-15 18:32:44,959 >>   Num examples = 2000
[INFO|trainer.py:2969] 2023-02-15 18:32:44,959 >>   Batch size = 8
[INFO|configuration_utils.py:543] 2023-02-15 18:32:44,969 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}
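
Prediction then runs over the same 2000 examples with batch size 8. A minimal sketch of this stage as the HF seq2seq examples structure it; trainer, predict_dataset, and tokenizer are assumed names, not taken from this output:

import numpy as np

# Sketch of the predict stage; `trainer`, `predict_dataset` and `tokenizer`
# are assumed to exist, mirroring the HF seq2seq example scripts.
predict_results = trainer.predict(predict_dataset, metric_key_prefix="predict")
preds = predict_results.predictions
# Restore pad ids in case -100 padding is present, then decode.
preds = np.where(preds != -100, preds, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
decoded_preds = [p.strip() for p in decoded_preds]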

[... the prediction loop emits the identical "Generate config GenerationConfig" INFO block at every step; the verbatim repeats for steps 0/250 through 46/250 are omitted ...]

 19% 47/250 [01:52<11:10,  3.30s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:37,840 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 19% 48/250 [01:54<10:06,  3.00s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:40,138 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 20% 49/250 [01:55<08:13,  2.45s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:41,317 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 20% 50/250 [01:56<05:58,  1.79s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:41,565 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 20% 51/250 [02:01<09:17,  2.80s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:46,720 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 21% 52/250 [02:03<08:26,  2.56s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:48,708 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 21% 53/250 [02:05<08:21,  2.55s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:51,233 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 22% 54/250 [02:07<07:21,  2.25s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:52,791 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 22% 55/250 [02:11<08:43,  2.68s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:56,485 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 22% 56/250 [02:11<06:27,  2.00s/it][INFO|configuration_utils.py:543] 2023-02-15 18:34:56,876 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 23% 57/250 [02:15<08:06,  2.52s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:00,623 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 23% 58/250 [02:17<07:25,  2.32s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:02,477 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 24% 59/250 [02:19<07:09,  2.25s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:04,552 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 24% 60/250 [02:23<08:50,  2.79s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:08,623 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 24% 61/250 [02:24<07:08,  2.27s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:09,665 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 25% 62/250 [02:25<05:42,  1.82s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:10,444 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 25% 63/250 [02:27<06:32,  2.10s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:13,190 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 26% 64/250 [02:31<08:17,  2.68s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:17,210 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 26% 65/250 [02:36<10:00,  3.24s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:21,783 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 26% 66/250 [02:38<08:44,  2.85s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:23,707 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 27% 67/250 [02:38<06:31,  2.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:24,185 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 27% 68/250 [02:39<05:12,  1.72s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:24,916 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 28% 69/250 [02:44<07:55,  2.62s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:29,661 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 28% 70/250 [02:46<07:21,  2.45s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:31,708 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 28% 71/250 [02:51<09:45,  3.27s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:36,884 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 29% 72/250 [02:52<07:26,  2.51s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:37,621 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 29% 73/250 [02:53<05:51,  1.99s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:38,392 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 30% 74/250 [02:54<05:10,  1.77s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:39,639 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 30% 75/250 [02:57<06:16,  2.15s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:42,698 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 30% 76/250 [02:58<05:18,  1.83s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:43,774 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 31% 77/250 [02:58<04:06,  1.42s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:44,251 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 31% 78/250 [03:00<03:58,  1.39s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:45,555 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 32% 79/250 [03:02<05:05,  1.79s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:48,277 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 32% 80/250 [03:03<04:05,  1.45s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:48,921 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 32% 81/250 [03:04<03:30,  1.25s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:49,702 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 33% 82/250 [03:07<04:54,  1.75s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:52,641 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 33% 83/250 [03:07<03:43,  1.34s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:53,019 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 34% 84/250 [03:08<03:23,  1.23s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:53,982 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 34% 85/250 [03:12<05:20,  1.94s/it][INFO|configuration_utils.py:543] 2023-02-15 18:35:57,590 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 34% 86/250 [03:17<07:38,  2.80s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:02,377 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 35% 87/250 [03:19<07:43,  2.84s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:05,336 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 35% 88/250 [03:22<07:01,  2.60s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:07,378 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 36% 89/250 [03:25<07:45,  2.89s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:10,941 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 36% 90/250 [03:26<05:52,  2.20s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:11,542 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 36% 91/250 [03:30<07:20,  2.77s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:15,628 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 37% 92/250 [03:32<07:04,  2.69s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:18,125 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 37% 93/250 [03:34<06:02,  2.31s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:19,543 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 38% 94/250 [03:35<04:53,  1.88s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:20,435 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 38% 95/250 [03:39<07:04,  2.74s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:25,178 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 38% 96/250 [03:44<08:40,  3.38s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:30,052 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 39% 97/250 [03:47<07:59,  3.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:32,615 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 39% 98/250 [03:49<07:37,  3.01s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:35,322 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 40% 99/250 [03:50<05:36,  2.23s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:35,742 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 40% 100/250 [03:50<04:20,  1.74s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:36,336 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 40% 101/250 [03:56<06:49,  2.75s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:41,429 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 41% 102/250 [04:00<08:03,  3.26s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:45,905 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 41% 103/250 [04:00<05:53,  2.40s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:46,292 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 42% 104/250 [04:01<04:19,  1.78s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:46,622 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 42% 105/250 [04:04<05:08,  2.13s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:49,563 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 42% 106/250 [04:07<05:36,  2.34s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:52,394 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 43% 107/250 [04:09<05:19,  2.23s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:54,376 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 43% 108/250 [04:14<07:29,  3.16s/it][INFO|configuration_utils.py:543] 2023-02-15 18:36:59,715 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 44% 109/250 [04:16<06:40,  2.84s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:01,799 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 44% 110/250 [04:18<05:49,  2.50s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:03,499 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 44% 111/250 [04:19<04:56,  2.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:04,791 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 45% 112/250 [04:19<03:43,  1.62s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:05,212 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 45% 113/250 [04:24<06:05,  2.67s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:10,328 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 46% 114/250 [04:26<05:34,  2.46s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:12,296 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 46% 115/250 [04:28<04:36,  2.05s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:13,389 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 46% 116/250 [04:28<03:27,  1.55s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:13,762 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 47% 117/250 [04:31<04:14,  1.92s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:16,540 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 47% 118/250 [04:35<05:39,  2.57s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:20,649 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 48% 119/250 [04:37<05:12,  2.39s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:22,601 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 48% 120/250 [04:40<05:54,  2.73s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:26,132 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 48% 121/250 [04:42<05:31,  2.57s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:28,320 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 49% 122/250 [04:44<04:54,  2.30s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:30,008 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 49% 123/250 [04:48<05:38,  2.66s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:33,509 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 50% 124/250 [04:48<04:11,  1.99s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:33,942 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 50% 125/250 [04:49<03:12,  1.54s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:34,422 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 50% 126/250 [04:51<03:55,  1.90s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:37,172 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 51% 127/250 [04:54<04:07,  2.01s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:39,434 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 51% 128/250 [04:55<03:58,  1.96s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:41,269 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 52% 129/250 [04:59<05:12,  2.58s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:45,310 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 52% 130/250 [05:00<04:12,  2.10s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:46,299 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 52% 131/250 [05:06<05:58,  3.01s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:51,433 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 53% 132/250 [05:08<05:30,  2.80s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:53,746 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 53% 133/250 [05:13<06:49,  3.50s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:58,884 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 54% 134/250 [05:14<05:06,  2.64s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:59,507 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 54% 135/250 [05:14<03:46,  1.97s/it][INFO|configuration_utils.py:543] 2023-02-15 18:37:59,924 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 54% 136/250 [05:17<04:11,  2.21s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:02,673 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 55% 137/250 [05:18<03:50,  2.04s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:04,316 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 55% 138/250 [05:21<03:59,  2.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:06,699 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 56% 139/250 [05:23<04:05,  2.21s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:09,077 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 56% 140/250 [05:26<04:32,  2.48s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:12,172 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 56% 141/250 [05:30<05:10,  2.85s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:15,883 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 57% 142/250 [05:34<05:42,  3.17s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:19,806 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 57% 143/250 [05:38<06:16,  3.52s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:24,145 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 58% 144/250 [05:40<05:13,  2.96s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:25,783 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 58% 145/250 [05:41<04:13,  2.42s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:26,937 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 58% 146/250 [05:44<04:36,  2.66s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:30,165 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 59% 147/250 [05:45<03:35,  2.09s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:30,935 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 59% 148/250 [05:50<05:04,  2.99s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:36,007 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 60% 149/250 [05:52<04:25,  2.63s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:37,806 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 60% 150/250 [05:53<03:40,  2.21s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:39,025 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 60% 151/250 [05:58<05:04,  3.08s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:44,143 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 61% 152/250 [05:59<03:46,  2.31s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:44,650 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 61% 153/250 [06:03<04:51,  3.00s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:49,274 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 62% 154/250 [06:04<03:37,  2.27s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:49,834 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 62% 155/250 [06:09<04:56,  3.12s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:54,925 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 62% 156/250 [06:09<03:35,  2.30s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:55,312 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 63% 157/250 [06:10<02:40,  1.72s/it][INFO|configuration_utils.py:543] 2023-02-15 18:38:55,695 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 63% 158/250 [06:15<04:11,  2.74s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:00,794 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 64% 159/250 [06:19<04:53,  3.22s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:05,153 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 64% 160/250 [06:20<03:33,  2.37s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:05,528 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 64% 161/250 [06:20<02:37,  1.77s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:05,906 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 65% 162/250 [06:21<02:18,  1.58s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:07,035 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 65% 163/250 [06:24<02:51,  1.97s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:09,927 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 66% 164/250 [06:24<02:08,  1.50s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:10,311 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 66% 165/250 [06:28<02:50,  2.00s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:13,498 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 66% 166/250 [06:30<02:45,  1.97s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:15,390 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 67% 167/250 [06:31<02:23,  1.73s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:16,571 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 67% 168/250 [06:36<03:41,  2.70s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:21,515 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 68% 169/250 [06:38<03:40,  2.73s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:24,309 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 68% 170/250 [06:43<04:11,  3.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:28,427 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 68% 171/250 [06:45<03:57,  3.01s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:31,112 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 69% 172/250 [06:50<04:23,  3.38s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:35,352 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 69% 173/250 [06:50<03:15,  2.54s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:35,949 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 70% 174/250 [06:50<02:22,  1.87s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:36,251 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 70% 175/250 [06:56<03:33,  2.85s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:41,391 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 70% 176/250 [06:57<03:09,  2.56s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:43,286 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 71% 177/250 [07:00<03:07,  2.57s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:45,872 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 71% 178/250 [07:01<02:24,  2.01s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:46,561 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 72% 179/250 [07:06<03:22,  2.85s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:51,366 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 72% 180/250 [07:09<03:35,  3.08s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:54,985 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 72% 181/250 [07:11<03:08,  2.73s/it][INFO|configuration_utils.py:543] 2023-02-15 18:39:56,909 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 73% 182/250 [07:16<03:55,  3.47s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:02,094 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 73% 183/250 [07:18<03:14,  2.90s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:03,655 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 74% 184/250 [07:19<02:34,  2.34s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:04,690 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 74% 185/250 [07:24<03:18,  3.06s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:09,424 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 74% 186/250 [07:28<03:49,  3.58s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:14,235 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 75% 187/250 [07:34<04:15,  4.06s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:19,404 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 75% 188/250 [07:35<03:28,  3.37s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:21,160 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 76% 189/250 [07:36<02:45,  2.71s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:22,329 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 76% 190/250 [07:37<02:06,  2.10s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:23,011 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 76% 191/250 [07:37<01:31,  1.56s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:23,302 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 77% 192/250 [07:39<01:35,  1.65s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:25,170 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 77% 193/250 [07:43<02:06,  2.23s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:28,744 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 78% 194/250 [07:45<01:58,  2.11s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:30,573 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 78% 195/250 [07:49<02:30,  2.73s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:34,755 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 78% 196/250 [07:50<02:01,  2.25s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:35,871 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 79% 197/250 [07:55<02:44,  3.11s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:40,994 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 79% 198/250 [07:59<02:51,  3.30s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:44,750 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 80% 199/250 [08:02<02:39,  3.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:47,493 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 80% 200/250 [08:07<03:06,  3.72s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:52,589 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 80% 201/250 [08:11<03:03,  3.74s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:56,374 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 81% 202/250 [08:11<02:09,  2.70s/it][INFO|configuration_utils.py:543] 2023-02-15 18:40:56,643 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 81% 203/250 [08:15<02:33,  3.27s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:01,243 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 82% 204/250 [08:19<02:34,  3.36s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:04,797 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 82% 205/250 [08:21<02:14,  2.99s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:06,925 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 82% 206/250 [08:25<02:27,  3.36s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:11,148 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 83% 207/250 [08:26<01:50,  2.58s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:11,912 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 83% 208/250 [08:31<02:14,  3.21s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:16,587 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 84% 209/250 [08:33<01:55,  2.82s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:18,489 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 84% 210/250 [08:34<01:35,  2.39s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:19,893 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 84% 211/250 [08:38<01:54,  2.93s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:24,068 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 85% 212/250 [08:40<01:41,  2.66s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:26,105 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 85% 213/250 [08:45<01:57,  3.17s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:30,455 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 86% 214/250 [08:45<01:24,  2.36s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:30,918 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 86% 215/250 [08:50<01:44,  3.00s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:35,410 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 86% 216/250 [08:52<01:31,  2.69s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:37,400 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 87% 217/250 [08:52<01:08,  2.06s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:37,988 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 87% 218/250 [08:53<00:50,  1.57s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:38,416 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 88% 219/250 [08:54<00:42,  1.38s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:39,362 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 88% 220/250 [08:54<00:35,  1.17s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:40,044 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 88% 221/250 [08:55<00:27,  1.07it/s][INFO|configuration_utils.py:543] 2023-02-15 18:41:40,422 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 89% 222/250 [08:56<00:31,  1.11s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:41,937 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 89% 223/250 [08:57<00:31,  1.18s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:43,278 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 90% 224/250 [08:59<00:34,  1.33s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:44,963 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 90% 225/250 [09:00<00:26,  1.07s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:45,422 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 90% 226/250 [09:02<00:34,  1.45s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:47,759 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 91% 227/250 [09:06<00:49,  2.16s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:51,583 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 91% 228/250 [09:06<00:35,  1.63s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:51,959 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 92% 229/250 [09:10<00:50,  2.38s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:56,113 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 92% 230/250 [09:12<00:42,  2.12s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:57,622 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 92% 231/250 [09:12<00:31,  1.65s/it][INFO|configuration_utils.py:543] 2023-02-15 18:41:58,182 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 93% 232/250 [09:17<00:46,  2.60s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:02,986 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 93% 233/250 [09:20<00:47,  2.81s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:06,300 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 94% 234/250 [09:22<00:39,  2.49s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:08,046 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 94% 235/250 [09:23<00:32,  2.14s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:09,348 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 94% 236/250 [09:29<00:42,  3.03s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:14,456 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 95% 237/250 [09:30<00:32,  2.50s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:15,741 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 95% 238/250 [09:30<00:22,  1.87s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:16,133 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 96% 239/250 [09:31<00:16,  1.51s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:16,810 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 96% 240/250 [09:34<00:18,  1.85s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:19,460 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 96% 241/250 [09:34<00:13,  1.48s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:20,072 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 97% 242/250 [09:37<00:14,  1.75s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:22,455 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 97% 243/250 [09:40<00:15,  2.16s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:25,569 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 98% 244/250 [09:43<00:14,  2.35s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:28,357 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 98% 245/250 [09:45<00:11,  2.26s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:30,426 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 98% 246/250 [09:47<00:08,  2.24s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:32,609 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 99% 247/250 [09:49<00:07,  2.34s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:35,183 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

 99% 248/250 [09:51<00:04,  2.21s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:37,079 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

100% 249/250 [09:53<00:02,  2.18s/it][INFO|configuration_utils.py:543] 2023-02-15 18:42:39,206 >> Generate config GenerationConfig {
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "transformers_version": "4.26.1"
}

100% 250/250 [09:56<00:00,  2.39s/it]
***** predict metrics *****
  predict_accuracy           =        1.0
  predict_bleu               =     0.0081
  predict_gen_len            =    12.2785
  predict_loss               =     9.8709
  predict_runtime            = 0:09:57.12
  predict_samples            =       2000
  predict_samples_per_second =      3.349
  predict_steps_per_second   =      0.419
[INFO|modelcard.py:449] 2023-02-15 18:42:42,406 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Translation', 'type': 'translation'}, 'metrics': [{'name': 'Bleu', 'type': 'bleu', 'value': 0.0076}, {'name': 'Accuracy', 'type': 'accuracy', 'value': 1.0}]}
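
The predict metrics above deserve a gloss: predict_accuracy is 1.0 while predict_bleu is about 0.008. BLEU scores n-gram overlap up to 4-grams, so on single-word emotion labels it is essentially degenerate even when every prediction is an exact match; accuracy is the informative metric for this seq2seq labeling setup. A minimal sketch (with hypothetical predictions, using the sacrebleu metric from the evaluate package installed earlier) illustrating the effect:

import evaluate

# Hypothetical single-word predictions and references, all exact matches.
predictions = ["joy", "anger", "sadness"]
references = [["joy"], ["anger"], ["sadness"]]

# Corpus BLEU stays near zero: one-token hypotheses contain no 2-/3-/4-grams,
# so the higher-order precisions collapse even with perfect matches.
bleu = evaluate.load("sacrebleu")
print(bleu.compute(predictions=predictions, references=references)["score"])

# Exact-match accuracy is the meaningful number for this task.
accuracy = sum(p == r[0] for p, r in zip(predictions, references)) / len(predictions)
print(accuracy)  # 1.0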

FLAN T5

import json

import torch
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM

# Use the GPU if one is available, otherwise fall back to CPU.
if torch.cuda.is_available():
    device = 0
else:
    device = -1

def perform_shot_learning(pipeline_type, model_name, test_file):
    """Zero-shot evaluation: prompt the model with the label inventory and the text."""
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.float32)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    our_pipeline = pipeline(pipeline_type, model=model, tokenizer=tokenizer, device=device)

    correct = 0

    labels = "possible labels: sadness, joy, love, anger, fear, surprise"

    with open(test_file) as f:
        f_lines = f.readlines()
        for line in f_lines:
            ex = json.loads(line)
            prompt = ex['text']

            # Build the zero-shot prompt: label inventory, the input text, and an answer cue.
            tmp = labels + '\n' + f'text: {prompt}' + '\n' + 'label: '

            # Greedy decoding; count a hit only on an exact label match.
            predict = our_pipeline(tmp, do_sample=False)[0]['generated_text']

            if predict == ex['label']:
                correct += 1

    print(f'Accuracy: {correct / len(f_lines)}')
test_ds = 'data/s2s-test.json'
perform_shot_learning('text2text-generation', 'google/flan-t5-large', test_ds)
/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py:1043: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
  warnings.warn(
Accuracy: 0.647
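The UserWarning above is triggered by the per-example pipeline calls inside perform_shot_learning. A minimal sketch of the batched variant the warning recommends, reusing device from the cell above; batch_size=32 is an illustrative choice, not a value used in the run:

import json
from transformers import pipeline

# Rebuild the same zero-shot pipeline as in perform_shot_learning.
flan_pipe = pipeline('text2text-generation', model='google/flan-t5-large', device=device)

labels = "possible labels: sadness, joy, love, anger, fear, surprise"
with open('data/s2s-test.json') as f:
    examples = [json.loads(line) for line in f]

prompts = [f"{labels}\ntext: {ex['text']}\nlabel: " for ex in examples]

# Passing the whole list lets the pipeline batch forward passes on the GPU
# instead of running one generation call per example.
outputs = flan_pipe(prompts, do_sample=False, batch_size=32)
correct = sum(out['generated_text'] == ex['label']
              for out, ex in zip(outputs, examples))
print(f'Accuracy: {correct / len(examples)}')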
!zip -r /content/training_cache_roberta_gpt.zip /content/roberta_gpt_cache
!zip -r /content/roberta_gpt_out.zip /content/roberta_gpt_out
  adding: content/roberta_gpt_cache/ (stored 0%)
  adding: content/roberta_gpt_out/ (stored 0%)
  ... (per-file listing omitted: the archives contain the Hugging Face hub caches and the gpt2_custom / roberta_custom training outputs, including checkpoints, tokenizer files, configs, TensorBoard runs and model weights)
!zip -r /content/training_cache_t5_large.zip /content/t5_large_cache
!zip -r /content/t5_large_out.zip /content/t5_large_out
  adding: content/t5_large_cache/ (stored 0%)
  adding: content/t5_large_out/ (stored 0%)
  ... (per-file listing omitted: the archives contain the google/t5-v1_1-large hub cache and the t5_v1_1 training outputs, including checkpoints 500-2500, tokenizer files, configs, TensorBoard runs and model weights)
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
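Presumably the archives created above are then copied to the mounted Drive; a minimal sketch, where the MyDrive/UG-final target folder is an assumed destination, not taken from the original notebook:

# The target folder on Drive is an assumption; adjust as needed.
!mkdir -p /content/drive/MyDrive/UG-final
!cp /content/training_cache_roberta_gpt.zip /content/roberta_gpt_out.zip \
    /content/training_cache_t5_large.zip /content/t5_large_out.zip \
    /content/drive/MyDrive/UG-final/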