{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Logo 1](https://git.wmi.amu.edu.pl/AITech/Szablon/raw/branch/master/Logotyp_AITech1.jpg)\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "<h1> Language Modelling</h1>\n",
    "<h2> 09. <i>Word embeddings (Word2vec)</i> [lecture]</h2>\n",
    "<h3> Filip Graliński (2022)</h3>\n",
    "</div>\n",
    "\n",
    "![Logo 2](https://git.wmi.amu.edu.pl/AITech/Szablon/raw/branch/master/Logotyp_AITech2.jpg)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Word embeddings (Word2vec)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In practice, the usefulness of wordnets turned out to be surprisingly\n",
    "limited. A bigger breakthrough in natural language processing came from\n",
    "multi-dimensional word representations, also known as word embeddings.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Word “dimensions”\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We could *embed* words in a multi-dimensional space, i.e. define a mapping\n",
    "$E \\colon V \\rightarrow \\mathcal{R}^m$ for some $m$ and estimate the\n",
    "probabilities $P(u|v)$ in such a way that for pairs $E(v)$, $E(v')$ and $E(u)$, $E(u')$ lying close to each other\n",
    "(according to some distance metric, for instance the ordinary Euclidean distance):\n",
    "\n",
    "$$P(u|v) \\approx P(u'|v').$$\n",
    "\n",
    "We call $E(u)$ the embedding of the word.\n",
    "\n"
   ]
  },
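  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the condition above concrete, here is a minimal sketch (not part of the original lecture; the names `E_demo`, `C_demo` and all sizes are purely illustrative): if the conditional distribution is computed from the embedding alone, e.g. as $\\operatorname{softmax}(CE(v))$ for some matrix $C$ (the model built later in this notebook has exactly this form), then two words whose embeddings lie close to each other automatically receive almost identical distributions of continuations.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "\n",
    "demo_vocab_size = 8   # toy vocabulary size, illustrative only\n",
    "demo_m = 3            # toy number of dimensions\n",
    "\n",
    "E_demo = torch.randn(demo_vocab_size, demo_m)   # one embedding per word\n",
    "C_demo = torch.randn(demo_vocab_size, demo_m)   # projection back onto the vocabulary\n",
    "\n",
    "# make the embedding of word 1 almost identical to that of word 0\n",
    "E_demo[1] = E_demo[0] + 0.01 * torch.randn(demo_m)\n",
    "\n",
    "def next_word_distribution(word_index):\n",
    "    # P(. | word) modelled as softmax(C E(word))\n",
    "    return torch.softmax(C_demo @ E_demo[word_index], dim=0)\n",
    "\n",
    "# the two nearby words get almost the same distribution of continuations\n",
    "print(next_word_distribution(0))\n",
    "print(next_word_distribution(1))"
   ]
  },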
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Dimensions fixed in advance?\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One could imagine that the $m$ dimensions are fixed in advance\n",
    "by a linguist. They would correspond to the typical\n",
    "“axes” considered in linguistics, for example:\n",
    "\n",
    "- is the word vulgar, common, colloquial, neutral or bookish?\n",
    "- is the word archaic, falling out of use, or a neologism?\n",
    "- does the word refer to women or to men (in terms of grammatical gender and/or\n",
    "  sociolinguistically)?\n",
    "- is the word singular or plural?\n",
    "- is the word a noun or a verb?\n",
    "- is the word a native word or a borrowing?\n",
    "- is the word a proper name or a common word?\n",
    "- does the word describe a concrete thing or an abstract concept?\n",
    "- …\n",
    "\n",
    "In practice, however, it turned out to be better to let the computer learn\n",
    "the possible dimensions on its own; we only fix $m$ (the number of dimensions) in advance.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### A bigram language model based on embeddings\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will now build the simplest language model based on embeddings. It is in fact the simplest\n",
    "**neural language model**, since the resulting model can be viewed as a simple neural network.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Vocabulary\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In a typical neural language model the size of the vocabulary has to be bounded\n",
    "in advance. Usually it is on the order of tens of thousands of words:\n",
    "we simply keep the $|V|$ most frequent words and replace all the others\n",
    "with a special token `<unk>` representing an unknown word.\n",
    "\n",
    "To build such a vocabulary, we will use the ready-made `Vocab` class from the torchtext package:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from itertools import islice\n",
    "import regex as re\n",
    "import sys\n",
    "from torchtext.vocab import build_vocab_from_iterator\n",
    "import pickle\n",
    "import lzma"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1027"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from itertools import islice\n",
    "import regex as re\n",
    "import sys\n",
    "from torchtext.vocab import build_vocab_from_iterator\n",
    "import lzma\n",
    "\n",
    "\n",
    "def get_words_from_line(line):\n",
    "    line = line.rstrip()\n",
    "    yield '<s>'\n",
    "    for m in re.finditer(r'[\\p{L}0-9\\*]+|\\p{P}+', line):\n",
    "        yield m.group(0).lower()\n",
    "    yield '</s>'\n",
    "\n",
    "\n",
    "def get_word_lines_from_file(file_name):\n",
    "    with lzma.open(file_name, 'r') as fh:\n",
    "        for line in fh:\n",
    "            yield get_words_from_line(line.decode('utf-8'))\n",
    "\n",
    "vocab_size = 20000\n",
    "\n",
    "vocab = build_vocab_from_iterator(\n",
    "    get_word_lines_from_file('train/in.tsv.xz'),\n",
    "    max_tokens = vocab_size,\n",
    "    specials = ['<unk>'])\n",
    "\n",
    "vocab['human']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['<unk>', '\\\\', 'the', '-\\\\', 'nmighty']"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vocab.lookup_tokens([0, 1, 2, 10, 12345])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open('vocabulary.pickle', 'wb') as fh:\n",
    "    pickle.dump(vocab, fh)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Defining the network\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will implement our simple neural network using the PyTorch framework.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/jacob/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/container.py:217: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.\n",
      "  input = module(input)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor(2.9869e-05, grad_fn=<SelectBackward0>)"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from torch import nn\n",
    "import torch\n",
    "\n",
    "embed_size = 100\n",
    "\n",
    "class SimpleBigramNeuralLanguageModel(nn.Module):\n",
    "    def __init__(self, vocabulary_size, embedding_size):\n",
    "        super(SimpleBigramNeuralLanguageModel, self).__init__()\n",
    "        self.model = nn.Sequential(\n",
    "            nn.Embedding(vocabulary_size, embedding_size),\n",
    "            nn.Linear(embedding_size, vocabulary_size),\n",
    "            nn.Softmax()\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.model(x)\n",
    "\n",
    "model = SimpleBigramNeuralLanguageModel(vocab_size, embed_size)\n",
    "\n",
    "vocab.set_default_index(vocab['<unk>'])\n",
    "ixs = torch.tensor(vocab.forward(['is']))\n",
    "out = model(ixs)\n",
    "out[0][vocab['the']]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let us train the model. First, let us just shuffle our file:\n",
    "\n",
    "    shuf < opensubtitlesA.pl.txt > opensubtitlesA.pl.shuf.txt\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils.data import IterableDataset\n",
    "import itertools\n",
    "\n",
    "def look_ahead_iterator(gen):\n",
    "    prev = None\n",
    "    for item in gen:\n",
    "        if prev is not None:\n",
    "            yield (prev, item)\n",
    "        prev = item\n",
    "\n",
    "class Bigrams(IterableDataset):\n",
    "    def __init__(self, text_file, vocabulary_size):\n",
    "        self.vocab = build_vocab_from_iterator(\n",
    "            get_word_lines_from_file(text_file),\n",
    "            max_tokens = vocabulary_size,\n",
    "            specials = ['<unk>'])\n",
    "        self.vocab.set_default_index(self.vocab['<unk>'])\n",
    "        self.vocabulary_size = vocabulary_size\n",
    "        self.text_file = text_file\n",
    "\n",
    "    def __iter__(self):\n",
    "        return look_ahead_iterator(\n",
    "            (self.vocab[t] for t in itertools.chain.from_iterable(get_word_lines_from_file(self.text_file))))\n",
    "\n",
    "train_dataset = Bigrams('train/in.tsv.xz', vocab_size)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(43, 0)"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from torch.utils.data import DataLoader\n",
    "\n",
    "next(iter(train_dataset))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[tensor([ 2, 5, 51, 3481, 231]), tensor([ 5, 51, 3481, 231, 4])]"
     ]
    }
   ],
   "source": [
    "from torch.utils.data import DataLoader\n",
    "\n",
    "next(iter(DataLoader(train_dataset, batch_size=5)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None"
     ]
    }
   ],
   "source": [
    "device = 'cpu'\n",
    "model = SimpleBigramNeuralLanguageModel(vocab_size, embed_size).to(device)\n",
    "data = DataLoader(train_dataset, batch_size=5000)\n",
    "optimizer = torch.optim.Adam(model.parameters())\n",
    "criterion = torch.nn.NLLLoss()\n",
    "\n",
    "model.train()\n",
    "step = 0\n",
    "for x, y in data:\n",
    "    x = x.to(device)\n",
    "    y = y.to(device)\n",
    "    optimizer.zero_grad()\n",
    "    ypredicted = model(x)\n",
    "    loss = criterion(torch.log(ypredicted), y)\n",
    "    if step % 100 == 0:\n",
    "        print(step, loss)\n",
    "    step += 1\n",
    "    loss.backward()\n",
    "    optimizer.step()\n",
    "\n",
    "torch.save(model.state_dict(), 'model1.bin')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us compute the most probable continuations of a given word:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('ciebie', 73, 0.1580502986907959), ('mnie', 26, 0.15395283699035645), ('<unk>', 0, 0.12862136960029602), ('nas', 83, 0.0410110242664814), ('niego', 172, 0.03281523287296295), ('niej', 245, 0.02104802615940571), ('siebie', 181, 0.020788608118891716), ('którego', 365, 0.019379809498786926), ('was', 162, 0.013852755539119244), ('wszystkich', 235, 0.01381855271756649)]"
     ]
    }
   ],
   "source": [
    "device = 'cuda'\n",
    "model = SimpleBigramNeuralLanguageModel(vocab_size, embed_size).to(device)\n",
    "model.load_state_dict(torch.load('model1.bin'))\n",
    "model.eval()\n",
    "\n",
    "ixs = torch.tensor(vocab.forward(['dla'])).to(device)\n",
    "\n",
    "out = model(ixs)\n",
    "top = torch.topk(out[0], 10)\n",
    "top_indices = top.indices.tolist()\n",
    "top_probs = top.values.tolist()\n",
    "top_words = vocab.lookup_tokens(top_indices)\n",
    "list(zip(top_words, top_indices, top_probs))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let us look at the most similar embeddings for a given word:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('.', 3, 0.404473215341568), (',', 4, 0.14222915470600128), ('z', 14, 0.10945753753185272), ('?', 6, 0.09583134204149246), ('w', 10, 0.050338443368673325), ('na', 12, 0.020703863352537155), ('i', 11, 0.016762692481279373), ('<unk>', 0, 0.014571071602404118), ('...', 15, 0.01453721895813942), ('</s>', 1, 0.011769450269639492)]"
     ]
    }
   ],
   "source": [
    "vocab = train_dataset.vocab\n",
    "ixs = torch.tensor(vocab.forward(['kłopot'])).to(device)\n",
    "\n",
    "out = model(ixs)\n",
    "top = torch.topk(out[0], 10)\n",
    "top_indices = top.indices.tolist()\n",
    "top_probs = top.values.tolist()\n",
    "top_words = vocab.lookup_tokens(top_indices)\n",
    "list(zip(top_words, top_indices, top_probs))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('poszedł', 1087, 1.0), ('idziesz', 1050, 0.4907470941543579), ('przyjeżdża', 4920, 0.45242372155189514), ('pojechałam', 12784, 0.4342481195926666), ('wrócił', 1023, 0.431664377450943), ('dobrać', 10351, 0.4312002956867218), ('stałeś', 5738, 0.4258835017681122), ('poszła', 1563, 0.41979148983955383), ('trafiłam', 18857, 0.4109022617340088), ('jedzie', 1674, 0.4091658890247345)]"
     ]
    }
   ],
   "source": [
    "cos = nn.CosineSimilarity(dim=1, eps=1e-6)\n",
    "\n",
    "embeddings = model.model[0].weight\n",
    "\n",
    "vec = embeddings[vocab['poszedł']]\n",
    "\n",
    "similarities = cos(vec, embeddings)\n",
    "\n",
    "top = torch.topk(similarities, 10)\n",
    "\n",
    "top_indices = top.indices.tolist()\n",
    "top_probs = top.values.tolist()\n",
    "top_words = vocab.lookup_tokens(top_indices)\n",
    "list(zip(top_words, top_indices, top_probs))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The model as a mathematical formula\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The neural network implemented above can be described by the following formula:\n",
    "\n",
    "$$\\vec{y} = \\operatorname{softmax}(CE(w_{i-1})),$$\n",
    "\n",
    "where:\n",
    "\n",
    "- $w_{i-1}$ is the first word of the bigram (the preceding word),\n",
    "- $E(w)$ is the embedding of the word $w$, a vector of size $m$,\n",
    "- $C$ is a matrix of size $|V| \\times m$ that projects the embedding vector onto a vector of the size of the vocabulary,\n",
    "- $\\vec{y}$ is the output vector of probabilities, of size $|V|$.\n",
    "\n"
   ]
  },
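  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (this cell is not part of the original lecture), we can recompute the same distribution by applying the weight matrices of `model` by hand, which makes the correspondence with the formula explicit. Note that `nn.Linear` additionally adds a bias vector, which the formula above does not show.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A hand-written version of y = softmax(C E(w)) using the layers of the model defined above.\n",
    "embedding_layer = model.model[0]   # computes E(w)\n",
    "linear_layer = model.model[1]      # holds C (and a bias vector b)\n",
    "\n",
    "w = torch.tensor(vocab.forward(['is'])).to(device)        # index of the preceding word\n",
    "e = embedding_layer(w)                                    # E(w), shape (1, m)\n",
    "logits = e @ linear_layer.weight.T + linear_layer.bias    # C E(w) + b, shape (1, |V|)\n",
    "y = torch.softmax(logits, dim=1)                          # the output distribution\n",
    "\n",
    "# should agree with model(w) up to numerical precision\n",
    "y[0][vocab['the']]"
   ]
  },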
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Hyperparameters\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that our model has two hyperparameters:\n",
    "\n",
    "- $m$, the size of the embedding,\n",
    "- $|V|$, the size of the vocabulary, if we assume that we can control it\n",
    "  (e.g. by trimming the vocabulary to a given number of most frequent words\n",
    "  and replacing the remaining ones with a special token, say, `<UNK>`).\n",
    "\n",
    "Of course, we can try to manipulate the values of $m$ and $|V|$ in order to\n",
    "improve the results of our model.\n",
    "\n",
    "**Question**: why does the value $m \\approx |V|$ make no sense? Why does the value $m = 1$ make no sense?\n",
    "\n"
   ]
  },
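  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how these two hyperparameters determine the size of the model, here is a quick parameter count for the values used above (a back-of-the-envelope sketch added to this notebook, not part of the original lecture): the embedding matrix has $|V| \\cdot m$ entries and the linear layer another $|V| \\cdot m$ weights plus $|V|$ biases.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Back-of-the-envelope parameter count for the bigram model defined above\n",
    "# (vocab_size = 20000 and embed_size = 100 were set in earlier cells).\n",
    "embedding_params = vocab_size * embed_size             # the embedding matrix\n",
    "linear_params = vocab_size * embed_size + vocab_size   # the matrix C plus the bias vector\n",
    "\n",
    "print(embedding_params + linear_params)                 # 4 020 000 parameters in total\n",
    "\n",
    "# the same number as reported by PyTorch:\n",
    "print(sum(p.numel() for p in model.parameters()))"
   ]
  },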
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Network diagram\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since multiplication by a matrix ($C$) is simply the application of a\n",
    "linear layer, we can interpret our network as a one-layer neural\n",
    "network, which can be illustrated with the following diagram:\n",
    "\n",
    "![img](./09_Zanurzenia_slow/bigram1.drawio.png \"Diagram of a simple bigram neural language model\")\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The embedding as matrix multiplication\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Obtaining the embedding ($E(w)$) is usually implemented as a *look-up*. Interestingly, the embedding can also be interpreted as\n",
    "multiplication by an embedding matrix $E$ of size $m \\times |V|$, provided that the input word is encoded\n",
    "as a one-hot vector, i.e. the word $w$ is fed to the input\n",
    "as a vector $\\vec{1_V}(w) = [0,\\ldots,0,1,0,\\ldots,0]$ of size $|V|$,\n",
    "consisting of zeros except for a one at the position corresponding to the index of the word $w$ in the vocabulary $V$.\n",
    "\n",
    "The formula then takes the form:\n",
    "\n",
    "$$\\vec{y} = \\operatorname{softmax}(CE\\vec{1_V}(w_{i-1})),$$\n",
    "\n",
    "where $E$ is this time a matrix of size $m \\times |V|$.\n",
    "\n",
    "**Question**: do we interpret $\\vec{1_V}(w)$ as a row vector or a column vector?\n",
    "\n",
    "As a diagram, this interpretation can be illustrated as follows:\n",
    "\n",
    "![img](./09_Zanurzenia_slow/bigram2.drawio.png \"Diagram of a simple bigram neural language model with one-hot input\")\n",
    "\n"
   ]
  },
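  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch (added to this notebook, not part of the original lecture) verifying this interpretation with the model trained above: multiplying a one-hot vector by the weight matrix of the `nn.Embedding` layer gives exactly the same vector as the usual look-up.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One-hot encoding times the embedding matrix equals an embedding look-up.\n",
    "# Note: PyTorch stores the embedding weights as a |V| x m matrix (one row per word),\n",
    "# so here the one-hot vector is a row vector multiplied from the left.\n",
    "embeddings = model.model[0].weight             # |V| x m\n",
    "\n",
    "word_index = vocab['the']\n",
    "\n",
    "one_hot = torch.zeros(vocab_size).to(device)   # the vector 1_V(w)\n",
    "one_hot[word_index] = 1.0\n",
    "\n",
    "via_multiplication = one_hot @ embeddings      # one-hot times the embedding matrix\n",
    "via_lookup = embeddings[word_index]            # simply taking the corresponding row\n",
    "\n",
    "torch.allclose(via_multiplication, via_lookup)"
   ]
  }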
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  },
  "org": null
 },
 "nbformat": 4,
 "nbformat_minor": 1
}