DL_RNN/RNN.ipynb
2024-05-25 19:27:27 +02:00


RNN

Installation of packages

%pip install torch
%pip install torchtext
%pip install datasets
%pip install pandas
%pip install scikit-learn
Requirement already satisfied: torch (2.3.0), torchtext (0.18.0), datasets (2.19.1), pandas (2.2.1), scikit-learn (1.4.1.post1) and their dependencies — all packages already installed (pip output truncated).
Note: you may need to restart the kernel to use updated packages.

Importing libraries

from collections import Counter
import torch
import pandas as pd
from torchtext.vocab import vocab
from sklearn.model_selection import train_test_split
from tqdm.notebook import tqdm

Read datasets

def read_data():
    train_dataset = pd.read_csv(
        "train/train.tsv.xz", compression="xz", sep="\t", names=["Label", "Text"]
    )
    dev_0_dataset = pd.read_csv("dev-0/in.tsv", sep="\t", names=["Text"])
    dev_0_labels = pd.read_csv("dev-0/expected.tsv", sep="\t", names=["Label"])
    test_A_dataset = pd.read_csv("test-A/in.tsv", sep="\t", names=["Text"])

    return train_dataset, dev_0_dataset, dev_0_labels, test_A_dataset
train_dataset, dev_0_dataset, dev_0_labels, test_A_dataset = read_data()

Split the training data into training and validation sets

train_texts, val_texts, train_labels, val_labels = train_test_split(
    train_dataset["Text"], train_dataset["Label"], test_size=0.1, random_state=42
)
train_dataset = pd.DataFrame({"Text": train_texts, "Label": train_labels})
val_dataset = pd.DataFrame({"Text": val_texts, "Label": val_labels})

Tokenize the text and labels

train_dataset["tokenized_text"] = train_dataset["Text"].apply(lambda x: x.split())
train_dataset["tokenized_labels"] = train_dataset["Label"].apply(lambda x: x.split())
val_dataset["tokenized_text"] = val_dataset["Text"].apply(lambda x: x.split())
val_dataset["tokenized_labels"] = val_dataset["Label"].apply(lambda x: x.split())
dev_0_dataset["tokenized_text"] = dev_0_dataset["Text"].apply(lambda x: x.split())
dev_0_dataset["tokenized_labels"] = dev_0_labels["Label"].apply(lambda x: x.split())
test_A_dataset["tokenized_text"] = test_A_dataset["Text"].apply(lambda x: x.split())
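Since both the text and the label columns are whitespace-delimited, a plain `str.split()` keeps tokens and tags aligned one-to-one. A minimal sketch of this alignment, using a made-up sentence rather than actual dataset rows:

```python
# Hypothetical sentence and tag line in the same whitespace-delimited format
text = "John lives in New York"
tags = "B-PER O O B-LOC I-LOC"

tokens = text.split()
labels = tags.split()

# One tag per token, so the two lists stay aligned
assert len(tokens) == len(labels)
print(list(zip(tokens, labels)))
```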

Create a vocab object which maps tokens to indices

def build_vocab(dataset):
    counter = Counter()
    for document in dataset:
        counter.update(document)
    return vocab(counter, specials=["<unk>", "<pad>", "<bos>", "<eos>"])
v = build_vocab(train_dataset["tokenized_text"])
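torchtext's `vocab()` counts tokens and assigns consecutive integer indices, with the special symbols placed first. The same idea can be sketched with a plain dict (the toy documents below are made up, not from the dataset, and the ordering only roughly mirrors what `vocab(counter, specials=[...])` produces):

```python
from collections import Counter

# Toy corpus standing in for train_dataset["tokenized_text"]
documents = [["the", "cat"], ["the", "dog"]]

counter = Counter()
for document in documents:
    counter.update(document)

# Specials come first, then corpus tokens in insertion order
specials = ["<unk>", "<pad>", "<bos>", "<eos>"]
stoi = {token: idx for idx, token in enumerate(specials + list(counter))}

print(stoi["<unk>"])  # 0
print(stoi["the"])    # 4 (first corpus token, right after the four specials)
```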

Map indices to tokens

itos = v.get_itos()

Number of tokens in the vocabulary

len(itos)
22154

Index of the 'rejects' token

v["rejects"]
9086

Index of the '<unk>' token

v["<unk>"]
0

Set the default index to the unknown token

v.set_default_index(v["<unk>"])

Use CUDA if available

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Vectorize the data

def data_process(dt):
    return [
        torch.tensor(
            [v["<bos>"]] + [v[token] for token in document] + [v["<eos>"]],
            dtype=torch.long,
            device=device,
        )
        for document in dt
    ]
train_tokens_ids = data_process(train_dataset["tokenized_text"])
val_tokens_ids = data_process(val_dataset["tokenized_text"])
dev_0_tokens_ids = data_process(dev_0_dataset["tokenized_text"])
test_A_tokens_ids = data_process(test_A_dataset["tokenized_text"])

Create a mapping from label to index

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

label_to_index = {label: idx for idx, label in enumerate(labels)}
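With this mapping, each BIO tag becomes a small integer; "O" gets index 0, which the evaluation code later treats as "not an entity". A quick check on a hypothetical tag sequence:

```python
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
label_to_index = {label: idx for idx, label in enumerate(labels)}

# Hypothetical tag sequence for one sentence
tag_sequence = ["B-PER", "I-PER", "O", "B-LOC"]
encoded = [label_to_index[t] for t in tag_sequence]
print(encoded)  # [1, 2, 0, 5]
```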

Vectorize the labels (NER)

def labels_process(dt, label_to_index):
    return [
        torch.tensor(
            [0] + [label_to_index[label] for label in document] + [0],
            dtype=torch.long,
            device=device,
        )
        for document in dt
    ]
train_labels = labels_process(train_dataset["tokenized_labels"], label_to_index)
val_labels = labels_process(val_dataset["tokenized_labels"], label_to_index)
dev_0_labels = labels_process(dev_0_dataset["tokenized_labels"], label_to_index)
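`labels_process` pads every tag sequence with index 0 ("O") at both ends so it stays aligned with the `<bos>` and `<eos>` tokens that `data_process` adds. A self-contained sketch of this padding, using a reduced label set for illustration:

```python
# Reduced label set for illustration; index 0 is "O", as above
label_to_index = {"O": 0, "B-PER": 1, "I-PER": 2}

def encode_labels(tag_sequence):
    # The leading and trailing 0 line up with the <bos>/<eos> tokens
    return [0] + [label_to_index[t] for t in tag_sequence] + [0]

print(encode_labels(["B-PER", "I-PER", "O"]))  # [0, 1, 2, 0, 0]
```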

Function for evaluation (returns precision, recall, and F1 score)

def get_scores(y_true, y_pred):
    acc_score = 0
    tp = 0
    fp = 0
    selected_items = 0
    relevant_items = 0

    for p, t in zip(y_pred, y_true):
        if p == t:
            acc_score += 1

        if p > 0 and p == t:
            tp += 1

        if p > 0:
            selected_items += 1

        if t > 0:
            relevant_items += 1

    if selected_items == 0:
        precision = 1.0
    else:
        precision = tp / selected_items

    if relevant_items == 0:
        recall = 1.0
    else:
        recall = tp / relevant_items

    if precision + recall == 0.0:
        f1 = 0.0
    else:
        f1 = 2 * precision * recall / (precision + recall)

    return precision, recall, f1
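The scores are token-level: any label index greater than 0 counts as an entity token, precision divides true positives by the number of predicted entity tokens, and recall divides by the number of gold entity tokens. A toy example with hand-picked label indices reproducing the same arithmetic:

```python
# Toy gold and predicted label indices; 0 is "O" and never counts as a hit
y_true = [0, 1, 2, 0, 5]
y_pred = [0, 1, 0, 0, 5]

tp = sum(1 for p, t in zip(y_pred, y_true) if p > 0 and p == t)  # 2
selected = sum(1 for p in y_pred if p > 0)                       # 2 predicted entity tokens
relevant = sum(1 for t in y_true if t > 0)                       # 3 gold entity tokens

precision = tp / selected
recall = tp / relevant
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # precision = 1.0, recall ~ 0.667, f1 ~ 0.8
```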

Calculate the number of unique tags

all_label_indices = [
    label_to_index[label]
    for document in train_dataset["tokenized_labels"]
    for label in document
]

num_tags = max(all_label_indices) + 1

print(num_tags)
9

Implementation of a bidirectional LSTM recurrent neural network

class LSTM(torch.nn.Module):

    def __init__(self, vocab_size, embedding_dim, hidden_dim, num_layers, num_tags):
        super(LSTM, self).__init__()
        self.embedding = torch.nn.Embedding(vocab_size, embedding_dim)
        self.rec = torch.nn.LSTM(
            embedding_dim, hidden_dim, num_layers, batch_first=True, bidirectional=True
        )
        # The LSTM is bidirectional, so each token's output is 2 * hidden_dim wide
        self.fc1 = torch.nn.Linear(hidden_dim * 2, num_tags)

    def forward(self, x):
        # (batch, seq_len) -> (batch, seq_len, embedding_dim)
        embedding = torch.relu(self.embedding(x))
        lstm_output, _ = self.rec(embedding)
        # Per-token scores over the tag set: (batch, seq_len, num_tags)
        out_weights = self.fc1(lstm_output)
        return out_weights

Initialize the LSTM model

lstm = LSTM(len(v.get_itos()), 100, 100, 1, num_tags).to(device)

Define the loss function

criterion = torch.nn.CrossEntropyLoss()

Define the optimizer

optimizer = torch.optim.Adam(lstm.parameters())

Function for model evaluation

def eval_model(dataset_tokens, dataset_labels, model):
    Y_true = []
    Y_pred = []
    for i in tqdm(range(len(dataset_labels))):
        batch_tokens = dataset_tokens[i].unsqueeze(0)
        tags = list(dataset_labels[i].cpu().numpy())
        Y_true += tags

        Y_batch_pred_weights = model(batch_tokens).squeeze(0)
        Y_batch_pred = torch.argmax(Y_batch_pred_weights, 1)
        Y_pred += list(Y_batch_pred.cpu().numpy())

    return get_scores(Y_true, Y_pred)

Function that returns the predicted labels

def pred_labels(dataset_tokens, model, label_to_index):
    Y_pred = []
    inv_label_to_index = {
        v: k for k, v in label_to_index.items()
    }  # Create the inverse mapping

    for i in tqdm(range(len(dataset_tokens))):
        batch_tokens = dataset_tokens[i].unsqueeze(0)
        Y_batch_pred_weights = model(batch_tokens).squeeze(0)
        Y_batch_pred = torch.argmax(Y_batch_pred_weights, 1)
        # Drop the predictions for the <bos> and <eos> positions so the output
        # lines up with the original (unpadded) token sequence
        predicted_labels = [
            inv_label_to_index[label.item()] for label in Y_batch_pred[1:-1]
        ]
        Y_pred.append(" ".join(predicted_labels))

    return Y_pred

Training

NUM_EPOCHS = 20
for epoch in range(NUM_EPOCHS):
    lstm.train()
    for i in tqdm(range(len(train_labels))):
        batch_tokens = train_tokens_ids[i].unsqueeze(0)
        tags = train_labels[i]

        predicted_tags = lstm(batch_tokens)

        optimizer.zero_grad()
        # CrossEntropyLoss expects (seq_len, num_tags) scores and (seq_len,) targets
        loss = criterion(predicted_tags.squeeze(0), tags)

        loss.backward()
        optimizer.step()

    lstm.eval()
    print(eval_model(val_tokens_ids, val_labels, lstm))
Validation scores (precision, recall, F1) after each epoch:
Epoch 1: (0.7132680320569902, 0.23037100949094047, 0.3482608695652174)
Epoch 2: (0.7749537892791127, 0.4823123382226057, 0.5945754298883177)
Epoch 3: (0.7864864864864864, 0.5858498705780846, 0.6715015658480303)
Epoch 4: (0.7872340425531915, 0.6384814495254529, 0.7050976655550262)
Epoch 5: (0.7830188679245284, 0.6683922922059247, 0.7211792086889061)
Epoch 6: (0.7966499162479062, 0.6839229220592464, 0.7359950479727638)
Epoch 7: (0.8300609343263372, 0.7052056370434282, 0.7625563675944643)
Epoch 8: (0.8456536618754278, 0.7106701179177451, 0.7723081731520549)
Epoch 9: (0.8593695878074126, 0.7135461604831751, 0.7796983029541169)
Epoch 10: (0.8427757291317466, 0.723037100949094, 0.778328173374613)
Epoch 11: (0.8629512793550649, 0.7080816796088583, 0.7778830963665087)
Epoch 12: (0.854050279329609, 0.7034800115041703, 0.7714871471376755)
Epoch 13: (0.8393868710429857, 0.724475122231809, 0.7777091694967583)
Epoch 14: (0.8705717292178183, 0.7138337647397182, 0.7844500632111252)
Epoch 15: (0.8473666554847367, 0.72648835202761, 0.7822855373180551)
Epoch 16: (0.8214173228346456, 0.7500719010641358, 0.7841250751653638)
Epoch 17: (0.8588936734017887, 0.7457578372159908, 0.7983374384236454)
Epoch 18: (0.8506047728015691, 0.7483462755248778, 0.7962056303549571)
Epoch 19: (0.8432908912830558, 0.7428817946505608, 0.789908256880734)
Epoch 20: (0.8448275862068966, 0.7469082542421628, 0.7928560525110669)
eval_model(val_tokens_ids, val_labels, lstm)
(0.8448275862068966, 0.7469082542421628, 0.7928560525110669)
eval_model(dev_0_tokens_ids, dev_0_labels, lstm)
(0.871974921630094, 0.8103006292239571, 0.8400072476897988)
dev_0_predictions = pred_labels(dev_0_tokens_ids, lstm, label_to_index)
dev_0_predictions = pd.DataFrame(dev_0_predictions, columns=["Label"])
dev_0_predictions.to_csv("dev-0/out.tsv", index=False, header=False)
test_A_predictions = pred_labels(test_A_tokens_ids, lstm, label_to_index)
test_A_predictions = pd.DataFrame(test_A_predictions, columns=["Label"])
test_A_predictions.to_csv("test-A/out.tsv", index=False, header=False)