{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"![Logo 1](https://git.wmi.amu.edu.pl/AITech/Szablon/raw/branch/master/Logotyp_AITech1.jpg)\n",
"<div class=\"alert alert-block alert-info\">\n",
"<h1> Systemy Dialogowe </h1>\n",
"<h2> 8. <i>Parsing semantyczny z wykorzystaniem technik uczenia maszynowego</i> [laboratoria]</h2> \n",
"<h3> Marek Kubis (2021)</h3>\n",
"</div>\n",
"\n",
"![Logo 2](https://git.wmi.amu.edu.pl/AITech/Szablon/raw/branch/master/Logotyp_AITech2.jpg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parsing semantyczny z wykorzystaniem technik uczenia maszynowego\n",
"================================================================\n",
"\n",
"Wprowadzenie\n",
"------------\n",
"Problem wykrywania slotów i ich wartości w wypowiedziach użytkownika można sformułować jako zadanie\n",
"polegające na przewidywaniu dla poszczególnych słów etykiet wskazujących na to czy i do jakiego\n",
"slotu dane słowo należy.\n",
"\n",
"> chciałbym zarezerwować stolik na jutro**/day** na godzinę dwunastą**/hour** czterdzieści**/hour** pięć**/hour** na pięć**/size** osób\n",
"\n",
"Granice slotów oznacza się korzystając z wybranego schematu etykietowania.\n",
"\n",
"### Schemat IOB\n",
"\n",
"| Prefix | Znaczenie |\n",
"|:------:|:---------------------------|\n",
"| I | wnętrze slotu (inside) |\n",
"| O | poza slotem (outside) |\n",
"| B | początek slotu (beginning) |\n",
"\n",
"> chciałbym zarezerwować stolik na jutro**/B-day** na godzinę dwunastą**/B-hour** czterdzieści**/I-hour** pięć**/I-hour** na pięć**/B-size** osób\n",
"\n",
"### Schemat IOBES\n",
"\n",
"| Prefix | Znaczenie |\n",
"|:------:|:---------------------------|\n",
"| I | wnętrze slotu (inside) |\n",
"| O | poza slotem (outside) |\n",
"| B | początek slotu (beginning) |\n",
"| E | koniec slotu (ending) |\n",
"| S | pojedyncze słowo (single) |\n",
"\n",
"> chciałbym zarezerwować stolik na jutro**/S-day** na godzinę dwunastą**/B-hour** czterdzieści**/I-hour** pięć**/E-hour** na pięć**/S-size** osób\n",
"\n",
"Jeżeli dla tak sformułowanego zadania przygotujemy zbiór danych\n",
"złożony z wypowiedzi użytkownika z oznaczonymi slotami (tzw. *zbiór uczący*),\n",
"to możemy zastosować techniki (nadzorowanego) uczenia maszynowego w celu zbudowania modelu\n",
"annotującego wypowiedzi użytkownika etykietami slotów.\n",
"\n",
"Do zbudowania takiego modelu można wykorzystać między innymi:\n",
"\n",
" 1. warunkowe pola losowe (Lafferty i in.; 2001),\n",
"\n",
" 2. rekurencyjne sieci neuronowe, np. sieci LSTM (Hochreiter i Schmidhuber; 1997),\n",
"\n",
" 3. transformery (Vaswani i in., 2017).\n",
"\n",
"Przykład\n",
"--------\n",
"Skorzystamy ze zbioru danych przygotowanego przez Schustera (2019)."
]
},
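{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before turning to the dataset, the cell below gives a minimal sketch of deriving IOB labels from slot annotations. It assumes, purely for illustration, that the slots of an utterance are given as `(start, end, name)` token spans with an exclusive end index."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: convert slot spans to IOB labels.\n",
"# Assumption (illustration only): slots are (start, end, name) tuples\n",
"# with token indices and an exclusive end index.\n",
"def spans_to_iob(tokens, slots):\n",
"    labels = ['O'] * len(tokens)\n",
"    for start, end, name in slots:\n",
"        labels[start] = 'B-' + name\n",
"        for i in range(start + 1, end):\n",
"            labels[i] = 'I-' + name\n",
"    return list(zip(tokens, labels))\n",
"\n",
"tokens = 'chciałbym zarezerwować stolik na jutro na godzinę dwunastą czterdzieści pięć na pięć osób'.split()\n",
"spans_to_iob(tokens, [(4, 5, 'day'), (7, 10, 'hour'), (11, 12, 'size')])"
]
},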
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Zbiór ten gromadzi wypowiedzi w trzech językach opisane slotami dla dwunastu ram należących do trzech dziedzin `Alarm`, `Reminder` oraz `Weather`. Dane wczytamy korzystając z biblioteki [conllu](https://pypi.org/project/conllu/)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# text: halo\t\t\t\n",
"\n",
"# intent: hello\t\t\t\n",
"\n",
"# slots: \t\t\t\n",
"\n",
"1\thalo\thello\tNoLabel\n",
"\n",
"\t\t\t\n",
"\n",
"# text: chaciałbym pójść na premierę filmu jakie premiery są w tym tygodniu\t\t\t\n",
"\n",
"# intent: reqmore\t\t\t\n",
"\n",
"# slots: \t\t\t\n",
"\n",
"1\tchaciałbym\treqmore\tNoLabel\n",
"\n",
"2\tpójść\treqmore\tNoLabel\n",
"\n",
"3\tna\treqmore\tNoLabel\n",
"\n",
"4\tpremierę\treqmore\tNoLabel\n",
"\n",
"5\tfilmu\treqmore\tNoLabel\n",
"\n",
"6\tjakie\treqmore\tB-goal\n",
"\n",
"7\tpremiery\treqmore\tI-goal\n",
"\n"
]
}
],
"source": [
"from conllu import parse_incr\n",
"fields = ['id', 'form', 'frame', 'slot']\n",
"\n",
"def nolabel2o(line, i):\n",
" return 'O' if line[i] == 'NoLabel' else line[i]\n",
"# pathTrain = '../tasks/zad8/en/train-en.conllu'\n",
"# pathTest = '../tasks/zad8/en/test-en.conllu'\n",
"\n",
"pathTrain = '../tasks/zad8/pl/train.conllu'\n",
"pathTest = '../tasks/zad8/pl/test.conllu'\n",
"\n",
"with open(pathTrain, encoding=\"UTF-8\") as trainfile:\n",
" i=0\n",
" for line in trainfile:\n",
" print(line)\n",
" i+=1\n",
" if i==15: break \n",
" trainset = list(parse_incr(trainfile, fields=fields, field_parsers={'slot': nolabel2o}))\n",
"with open(pathTest, encoding=\"UTF-8\") as testfile:\n",
" testset = list(parse_incr(testfile, fields=fields, field_parsers={'slot': nolabel2o}))\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Zobaczmy kilka przykładowych wypowiedzi z tego zbioru."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"<tbody>\n",
"<tr><td style=\"text-align: right;\">1</td><td>wybieram</td><td>inform</td><td>O </td></tr>\n",
"<tr><td style=\"text-align: right;\">2</td><td>batmana </td><td>inform</td><td>B-title</td></tr>\n",
"</tbody>\n",
"</table>"
],
"text/plain": [
"'<table>\\n<tbody>\\n<tr><td style=\"text-align: right;\">1</td><td>wybieram</td><td>inform</td><td>O </td></tr>\\n<tr><td style=\"text-align: right;\">2</td><td>batmana </td><td>inform</td><td>B-title</td></tr>\\n</tbody>\\n</table>'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from tabulate import tabulate\n",
"tabulate(trainset[1], tablefmt='html')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"<tbody>\n",
"<tr><td style=\"text-align: right;\">1</td><td>chcę </td><td>inform</td><td>O </td></tr>\n",
"<tr><td style=\"text-align: right;\">2</td><td>zarezerwować</td><td>inform</td><td>B-goal</td></tr>\n",
"<tr><td style=\"text-align: right;\">3</td><td>bilety </td><td>inform</td><td>O </td></tr>\n",
"</tbody>\n",
"</table>"
],
"text/plain": [
"'<table>\\n<tbody>\\n<tr><td style=\"text-align: right;\">1</td><td>chcę </td><td>inform</td><td>O </td></tr>\\n<tr><td style=\"text-align: right;\">2</td><td>zarezerwować</td><td>inform</td><td>B-goal</td></tr>\\n<tr><td style=\"text-align: right;\">3</td><td>bilety </td><td>inform</td><td>O </td></tr>\\n</tbody>\\n</table>'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tabulate(trainset[16], tablefmt='html')"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"<tbody>\n",
"<tr><td style=\"text-align: right;\">1</td><td>chciałbym </td><td>inform</td><td>O</td></tr>\n",
"<tr><td style=\"text-align: right;\">2</td><td>anulować </td><td>inform</td><td>O</td></tr>\n",
"<tr><td style=\"text-align: right;\">3</td><td>rezerwację</td><td>inform</td><td>O</td></tr>\n",
"<tr><td style=\"text-align: right;\">4</td><td>biletu </td><td>inform</td><td>O</td></tr>\n",
"</tbody>\n",
"</table>"
],
"text/plain": [
"'<table>\\n<tbody>\\n<tr><td style=\"text-align: right;\">1</td><td>chciałbym </td><td>inform</td><td>O</td></tr>\\n<tr><td style=\"text-align: right;\">2</td><td>anulować </td><td>inform</td><td>O</td></tr>\\n<tr><td style=\"text-align: right;\">3</td><td>rezerwację</td><td>inform</td><td>O</td></tr>\\n<tr><td style=\"text-align: right;\">4</td><td>biletu </td><td>inform</td><td>O</td></tr>\\n</tbody>\\n</table>'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tabulate(trainset[20], tablefmt='html')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Budując model skorzystamy z architektury opartej o rekurencyjne sieci neuronowe\n",
"zaimplementowanej w bibliotece [flair](https://github.com/flairNLP/flair) (Akbik i in. 2018)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from flair.data import Corpus, Sentence, Token\n",
"from flair.datasets import SentenceDataset\n",
"from flair.embeddings import StackedEmbeddings\n",
"from flair.embeddings import WordEmbeddings\n",
"from flair.embeddings import CharacterEmbeddings\n",
"from flair.embeddings import FlairEmbeddings\n",
"from flair.models import SequenceTagger\n",
"from flair.trainers import ModelTrainer\n",
"from flair.datasets import DataLoader\n",
"\n",
"# determinizacja obliczeń\n",
"import random\n",
"import torch\n",
"random.seed(42)\n",
"torch.manual_seed(42)\n",
"\n",
"if torch.cuda.is_available():\n",
" torch.cuda.manual_seed(0)\n",
" torch.cuda.manual_seed_all(0)\n",
" torch.backends.cudnn.enabled = False\n",
" torch.backends.cudnn.benchmark = False\n",
" torch.backends.cudnn.deterministic = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Dane skonwertujemy do formatu wykorzystywanego przez `flair`, korzystając z następującej funkcji."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Corpus: 346 train + 38 dev + 32 test sentences\n",
"Dictionary with 78 tags: <unk>, O, O/reqmore, B-interval/reqmore, I-interval/reqmore, O/inform, B-title/inform, B-date/inform, I-date/inform, B-time/inform, B-quantity/inform, B-area/inform, I-area/inform, B-goal/inform, O/bye, O/hello, O/reqmore inform, B-goal/reqmore inform, I-goal/reqmore inform, B-date/reqmore inform, B-interval/reqmore inform, O/null, O/help, B-goal/reqmore, I-goal/reqmore, B-title/reqmore, B-title/reqmore inform, I-title/reqmore inform, O/ack, O/reqalts\n"
]
}
],
"source": [
"def conllu2flair(sentences, label1=None, label2=None):\n",
" fsentences = []\n",
"\n",
" for sentence in sentences:\n",
" fsentence = Sentence()\n",
"\n",
" for token in sentence:\n",
" ftoken = Token(token['form'])\n",
"\n",
" if label1:\n",
" if label2:\n",
" ftoken.add_tag(label1, token[label1] + \"/\" + token[label2])\n",
" else:\n",
" ftoken.add_tag(label1, token[label1])\n",
" \n",
" fsentence.add_token(ftoken)\n",
"\n",
" fsentences.append(fsentence)\n",
"\n",
" return SentenceDataset(fsentences)\n",
"\n",
"corpus = Corpus(train=conllu2flair(trainset, 'slot', \"frame\"), test=conllu2flair(testset, 'slot', \"frame\"))\n",
"print(corpus)\n",
"tag_dictionary = corpus.make_tag_dictionary(tag_type='slot')\n",
"print(tag_dictionary)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Nasz model będzie wykorzystywał wektorowe reprezentacje słów (zob. [Word Embeddings](https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_3_WORD_EMBEDDING.md))."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"embedding_types = [\n",
" WordEmbeddings('pl'),\n",
" FlairEmbeddings('polish-forward'),\n",
" FlairEmbeddings('polish-backward'),\n",
" CharacterEmbeddings(),\n",
"]\n",
"\n",
"embeddings = StackedEmbeddings(embeddings=embedding_types)\n",
"tagger = SequenceTagger(hidden_size=256, embeddings=embeddings,\n",
" tag_dictionary=tag_dictionary,\n",
" tag_type='slot', use_crf=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Zobaczmy jak wygląda architektura sieci neuronowej, która będzie odpowiedzialna za przewidywanie\n",
"slotów w wypowiedziach."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SequenceTagger(\n",
" (embeddings): StackedEmbeddings(\n",
" (list_embedding_0): WordEmbeddings('pl')\n",
" (list_embedding_1): FlairEmbeddings(\n",
" (lm): LanguageModel(\n",
" (drop): Dropout(p=0.25, inplace=False)\n",
" (encoder): Embedding(1602, 100)\n",
" (rnn): LSTM(100, 2048)\n",
" (decoder): Linear(in_features=2048, out_features=1602, bias=True)\n",
" )\n",
" )\n",
" (list_embedding_2): FlairEmbeddings(\n",
" (lm): LanguageModel(\n",
" (drop): Dropout(p=0.25, inplace=False)\n",
" (encoder): Embedding(1602, 100)\n",
" (rnn): LSTM(100, 2048)\n",
" (decoder): Linear(in_features=2048, out_features=1602, bias=True)\n",
" )\n",
" )\n",
" (list_embedding_3): CharacterEmbeddings(\n",
" (char_embedding): Embedding(275, 25)\n",
" (char_rnn): LSTM(25, 25, bidirectional=True)\n",
" )\n",
" )\n",
" (word_dropout): WordDropout(p=0.05)\n",
" (locked_dropout): LockedDropout(p=0.5)\n",
" (embedding2nn): Linear(in_features=4446, out_features=4446, bias=True)\n",
" (rnn): LSTM(4446, 256, batch_first=True, bidirectional=True)\n",
" (linear): Linear(in_features=512, out_features=78, bias=True)\n",
" (beta): 1.0\n",
" (weights): None\n",
" (weight_tensor) None\n",
")\n"
]
}
],
"source": [
"print(tagger)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Wykonamy dziesięć iteracji (epok) uczenia a wynikowy model zapiszemy w katalogu `slot-model`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"modelPath = 'slot-model/final-model.pt'\n",
"\n",
"from os.path import exists\n",
"\n",
"fileExists = exists(modelPath)\n",
"\n",
"if(not fileExists):\n",
" trainer = ModelTrainer(tagger, corpus)\n",
" trainer.train('slot-model',\n",
" learning_rate=0.1,\n",
" mini_batch_size=32,\n",
" max_epochs=10,\n",
" train_with_dev=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Jakość wyuczonego modelu możemy ocenić, korzystając z zaraportowanych powyżej metryk, tj.:\n",
"\n",
" - *tp (true positives)*\n",
"\n",
" > liczba słów oznaczonych w zbiorze testowym etykietą $e$, które model oznaczył tą etykietą\n",
"\n",
" - *fp (false positives)*\n",
"\n",
" > liczba słów nieoznaczonych w zbiorze testowym etykietą $e$, które model oznaczył tą etykietą\n",
"\n",
" - *fn (false negatives)*\n",
"\n",
" > liczba słów oznaczonych w zbiorze testowym etykietą $e$, którym model nie nadał etykiety $e$\n",
"\n",
" - *precision*\n",
"\n",
" > $$\\frac{tp}{tp + fp}$$\n",
"\n",
" - *recall*\n",
"\n",
" > $$\\frac{tp}{tp + fn}$$\n",
"\n",
" - $F_1$\n",
"\n",
" > $$\\frac{2 \\cdot precision \\cdot recall}{precision + recall}$$\n",
"\n",
" - *micro* $F_1$\n",
"\n",
" > $F_1$ w którym $tp$, $fp$ i $fn$ są liczone łącznie dla wszystkich etykiet, tj. $tp = \\sum_{e}{{tp}_e}$, $fn = \\sum_{e}{{fn}_e}$, $fp = \\sum_{e}{{fp}_e}$\n",
"\n",
" - *macro* $F_1$\n",
"\n",
" > średnia arytmetyczna z $F_1$ obliczonych dla poszczególnych etykiet z osobna.\n",
"\n",
"Wyuczony model możemy wczytać z pliku korzystając z metody `load`."
]
},
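{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below illustrates the difference between micro and macro averaging on a small set of per-label counts; the numbers are made up and serve only to show the computation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: micro vs. macro F1 from per-label tp/fp/fn counts (made-up numbers).\n",
"counts = {\n",
"    'B-title': {'tp': 10, 'fp': 2, 'fn': 3},\n",
"    'B-date':  {'tp': 4,  'fp': 1, 'fn': 2},\n",
"}\n",
"\n",
"def f1_from_counts(tp, fp, fn):\n",
"    p = tp / (tp + fp)\n",
"    r = tp / (tp + fn)\n",
"    return 2 * p * r / (p + r)\n",
"\n",
"# micro F1: sum the counts over all labels first\n",
"micro_f1 = f1_from_counts(sum(c['tp'] for c in counts.values()),\n",
"                          sum(c['fp'] for c in counts.values()),\n",
"                          sum(c['fn'] for c in counts.values()))\n",
"\n",
"# macro F1: average the per-label F1 scores\n",
"macro_f1 = sum(f1_from_counts(**c) for c in counts.values()) / len(counts)\n",
"\n",
"print('micro F1:', micro_f1)\n",
"print('macro F1:', macro_f1)"
]
},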
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2022-05-22 15:25:19,970 loading file slot-model/final-model.pt\n"
]
}
],
"source": [
"model = SequenceTagger.load(modelPath)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Wczytany model możemy wykorzystać do przewidywania slotów w wypowiedziach użytkownika, korzystając\n",
"z przedstawionej poniżej funkcji `predict`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('co', 'O/reqmore'), ('gracie', 'O/reqmore'), ('obecnie', 'O/reqmore')]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def predict(model, sentence):\n",
" csentence = [{'form': word} for word in sentence]\n",
" fsentence = conllu2flair([csentence])[0]\n",
" model.predict(fsentence)\n",
" return [(token, ftoken.get_tag('slot').value) for token, ftoken in zip(sentence, fsentence)]\n",
"\n",
"predict(model, 'co gracie obecnie'.split())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Jak pokazuje przykład poniżej model wyuczony tylko na 100 przykładach popełnia w dosyć prostej\n",
"wypowiedzi błąd etykietując słowo `alarm` tagiem `B-weather/noun`."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"<tbody>\n",
"<tr><td>kiedy </td><td>O/reqmore</td></tr>\n",
"<tr><td>gracie</td><td>O/reqmore</td></tr>\n",
"<tr><td>film </td><td>O/reqmore</td></tr>\n",
"<tr><td>zorro </td><td>O/reqmore</td></tr>\n",
"</tbody>\n",
"</table>"
],
"text/plain": [
"'<table>\\n<tbody>\\n<tr><td>kiedy </td><td>O/reqmore</td></tr>\\n<tr><td>gracie</td><td>O/reqmore</td></tr>\\n<tr><td>film </td><td>O/reqmore</td></tr>\\n<tr><td>zorro </td><td>O/reqmore</td></tr>\\n</tbody>\\n</table>'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tabulate(predict(model, 'kiedy gracie film zorro'.split()), tablefmt='html')"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'testset' is not defined",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32mc:\\Develop\\wmi\\AITECH\\sem1\\Systemy dialogowe\\lab\\08-parsing-semantyczny-uczenie(zmodyfikowany).ipynb Cell 25'\u001b[0m in \u001b[0;36m<cell line: 40>\u001b[1;34m()\u001b[0m\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=36'>37</a>\u001b[0m \u001b[39mprint\u001b[39m(\u001b[39m\"\u001b[39m\u001b[39mrecall: \u001b[39m\u001b[39m\"\u001b[39m, recallScore)\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=37'>38</a>\u001b[0m \u001b[39mprint\u001b[39m(\u001b[39m\"\u001b[39m\u001b[39mf1: \u001b[39m\u001b[39m\"\u001b[39m, f1Score)\n\u001b[1;32m---> <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=39'>40</a>\u001b[0m \u001b[39meval\u001b[39;49m()\n",
"\u001b[1;32mc:\\Develop\\wmi\\AITECH\\sem1\\Systemy dialogowe\\lab\\08-parsing-semantyczny-uczenie(zmodyfikowany).ipynb Cell 25'\u001b[0m in \u001b[0;36meval\u001b[1;34m()\u001b[0m\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=13'>14</a>\u001b[0m fp \u001b[39m=\u001b[39m \u001b[39m0\u001b[39m\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=14'>15</a>\u001b[0m fn \u001b[39m=\u001b[39m \u001b[39m0\u001b[39m\n\u001b[1;32m---> <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=15'>16</a>\u001b[0m sentences \u001b[39m=\u001b[39m [sentence \u001b[39mfor\u001b[39;00m sentence \u001b[39min\u001b[39;00m testset]\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=16'>17</a>\u001b[0m \u001b[39mfor\u001b[39;00m sentence \u001b[39min\u001b[39;00m sentences:\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=17'>18</a>\u001b[0m \u001b[39m# get sentence as terms list\u001b[39;00m\n\u001b[0;32m <a href='vscode-notebook-cell:/c%3A/Develop/wmi/AITECH/sem1/Systemy%20dialogowe/lab/08-parsing-semantyczny-uczenie%28zmodyfikowany%29.ipynb#ch0000024?line=18'>19</a>\u001b[0m termsList \u001b[39m=\u001b[39m [w[\u001b[39m\"\u001b[39m\u001b[39mform\u001b[39m\u001b[39m\"\u001b[39m] \u001b[39mfor\u001b[39;00m w \u001b[39min\u001b[39;00m sentence]\n",
"\u001b[1;31mNameError\u001b[0m: name 'testset' is not defined"
]
}
],
"source": [
"# evaluation\n",
"\n",
"def precision(tpScore, fpScore):\n",
" return float(tpScore) / (tpScore + fpScore)\n",
"\n",
"def recall(tpScore, fnScore):\n",
" return float(tpScore) / (tpScore + fnScore)\n",
"\n",
"def f1(precision, recall):\n",
" return 2 * precision * recall/(precision + recall)\n",
"\n",
"def eval():\n",
" tp = 0\n",
" fp = 0\n",
" fn = 0\n",
" sentences = [sentence for sentence in testset]\n",
" for sentence in sentences:\n",
" # get sentence as terms list\n",
" termsList = [w[\"form\"] for w in sentence]\n",
" # predict tags\n",
" predTags = [tag[1] for tag in predict(model, termsList)]\n",
" \n",
2022-05-23 07:53:43 +02:00
" expTags = [token[\"slot\"] + \"/\" + token[\"frame\"] for token in sentence]\n",
2022-05-05 19:58:00 +02:00
" for i in range(len(predTags)):\n",
2022-05-23 07:53:43 +02:00
" if (expTags[i][0] == \"O\" and expTags[i] != predTags[i]):\n",
2022-05-05 19:58:00 +02:00
" fp += 1\n",
2022-05-23 07:53:43 +02:00
" elif ((expTags[i][0] != \"O\") & (predTags[i][0] == \"O\")):\n",
2022-05-05 19:58:00 +02:00
" fn += 1\n",
2022-05-23 07:53:43 +02:00
" elif ((expTags[i][0] != \"O\") & (predTags[i] == expTags[i])):\n",
2022-05-05 19:58:00 +02:00
" tp += 1\n",
"\n",
" precisionScore = precision(tp, fp)\n",
" recallScore = recall(tp, fn)\n",
" f1Score = f1(precisionScore, recallScore)\n",
" print(\"stats: \")\n",
" print(\"precision: \", precisionScore)\n",
" print(\"recall: \", recallScore)\n",
" print(\"f1: \", f1Score)\n",
"\n",
"eval()\n",
"\n",
" "
]
},
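{
"cell_type": "markdown",
"metadata": {},
"source": [
"The evaluation above works at the level of individual tokens. As an alternative, slots can also be evaluated at the span level, e.g. with the `seqeval` package. The sketch below assumes that `seqeval` is installed and that `model`, `testset`, and `predict` from the cells above are available; it scores only the slot part of the combined slot/frame tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: span-level slot evaluation with seqeval (assumes seqeval is installed).\n",
"from seqeval.metrics import classification_report, f1_score\n",
"\n",
"y_true, y_pred = [], []\n",
"for sentence in testset:\n",
"    forms = [token['form'] for token in sentence]\n",
"    # gold slot labels (already in IOB form after loading)\n",
"    y_true.append([token['slot'] for token in sentence])\n",
"    # keep only the slot part of the predicted slot/frame tags\n",
"    y_pred.append([tag.split('/')[0] for _, tag in predict(model, forms)])\n",
"\n",
"print(classification_report(y_true, y_pred))\n",
"print('span-level F1:', f1_score(y_true, y_pred))"
]
},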
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Literatura\n",
"----------\n",
" 1. Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis, Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. NAACL-HLT (1) 2019, pp. 3795-3805\n",
" 2. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML '01). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 282– 289, https://repository.upenn.edu/cgi/viewcontent.cgi?article=1162&context=cis_papers\n",
" 3. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput. 9, 8 (November 15, 1997), 1735– 1780, https://doi.org/10.1162/neco.1997.9.8.1735\n",
" 4. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, Attention is All you Need, NIPS 2017, pp. 5998-6008, https://arxiv.org/abs/1706.03762\n",
" 5. Alan Akbik, Duncan Blythe, Roland Vollgraf, Contextual String Embeddings for Sequence Labeling, Proceedings of the 27th International Conference on Computational Linguistics, pp. 1638– 1649, https://www.aclweb.org/anthology/C18-1139.pdf\n"
]
}
],
"metadata": {
"author": "Marek Kubis",
"email": "mkubis@amu.edu.pl",
"interpreter": {
"hash": "2f9d6cf1e3d8195079a65c851de355134a77367bcd714b1a5d498c42d3c07114"
},
"jupytext": {
"cell_metadata_filter": "-all",
"main_language": "python",
"notebook_metadata_filter": "-all"
},
"kernelspec": {
"display_name": "Python 3.8.3 64-bit",
"language": "python",
"name": "python3"
},
"lang": "pl",
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
},
"subtitle": "8.Parsing semantyczny z wykorzystaniem technik uczenia maszynowego[laboratoria]",
"title": "Systemy Dialogowe",
"year": "2021"
},
"nbformat": 4,
"nbformat_minor": 4
}