lab 10
This commit is contained in:
parent
1d80fbebf5
commit
b8e4adfe93
489
lab/10-zarzadzanie-dialogiem-uczenie.ipynb
Normal file
@@ -0,0 +1,489 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "![Logo 1](https://git.wmi.amu.edu.pl/AITech/Szablon/raw/branch/master/Logotyp_AITech1.jpg)\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "<h1> Systemy Dialogowe </h1>\n",
    "<h2> 10. <i>Dialogue management using machine learning techniques</i> [lab]</h2>\n",
    "<h3> Marek Kubis (2021)</h3>\n",
    "</div>\n",
    "\n",
    "![Logo 2](https://git.wmi.amu.edu.pl/AITech/Szablon/raw/branch/master/Logotyp_AITech2.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dialogue management using machine learning techniques\n",
    "======================================================\n",
    "\n",
    "Reinforcement learning\n",
    "----------------------\n",
    "\n",
    "Instead of hand-coding the set of rules that determines which action the agent should take in a given state, a suitable dialogue policy can be built using machine learning techniques.\n",
    "\n",
    "Besides the supervised learning methods that we used to build the NLU model, *reinforcement learning* is also used to construct dialogue policies.\n",
    "\n",
    "In this setting we will look for a function $Q^*: S \\times A \\to R$ which, for a dialogue state $s \\in S$ and a dialogue\n",
    "act $a \\in A$, returns a reward $r \\in R$, i.e. a real value that measures how beneficial\n",
    "taking action $a$ in state $s$ is.\n",
    "\n",
    "We will also assume that the function we are looking for should maximize the *return*, i.e.\n",
    "the cumulative reward collected over the course of the dialogue, so for turn $t_0$ the learning objective takes the form:\n",
    "\n",
    "$$ \\sum_{t=t_0}^{\\infty}{\\gamma^{t-1}r_t} $$\n",
    "\n",
    "where:\n",
    "\n",
    " - $t$: the agent's turn,\n",
    "\n",
    " - $r_t$: the reward received in turn $t$,\n",
    "\n",
    " - $\\gamma \\in [0, 1]$: the discount factor (for dialogue agents closer to $1$ than to $0$, cf. e.g. Rieser and Lemon (2011))."
   ]
  },
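  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numeric illustration of the objective above (a sketch added for this write-up, not part of ConvLab-2): the cell below computes the return $\\sum_{t=t_0}^{\\infty}{\\gamma^{t-1}r_t}$ for a short, hypothetical sequence of per-turn rewards and shows how the discount factor $\\gamma$ weighs rewards received in later turns."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# a minimal sketch: discounted return for a hypothetical reward sequence (t_0 = 1)\n",
    "def discounted_return(rewards, gamma, t0=1):\n",
    "    return sum(gamma ** (t - 1) * r for t, r in enumerate(rewards, start=t0))\n",
    "\n",
    "sample_rewards = [-1, -1, -1, 40]  # hypothetical rewards for a four-turn dialogue\n",
    "for gamma in (0.9, 0.99, 1.0):\n",
    "    print(f'gamma={gamma}: return={discounted_return(sample_rewards, gamma):.2f}')"
   ]
  },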
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "During reinforcement learning the dialogue agent interacts with an *environment* which,\n",
    "for each action taken by the dialogue policy, returns the next state together with the reward associated with\n",
    "performing that action in the current state.\n",
    "\n",
    "How the information coming from the environment is used to find the function $Q^*$\n",
    "depends on the chosen learning method.\n",
    "In the example presented below we will use the $DQN$ algorithm (Mnih et al., 2013), which means that:\n",
    "\n",
    " 1. we will approximate the function $Q^*$ with a neural network,\n",
    "\n",
    " 2. we will determine the weights of the network using gradient descent.\n",
    "\n",
    "Example\n",
    "-------\n",
    "\n",
    "We will embed the dialogue policy in the *ConvLab-2* environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from convlab2.dialog_agent.agent import PipelineAgent\n",
    "from convlab2.dialog_agent.env import Environment\n",
    "from convlab2.dst.rule.multiwoz import RuleDST\n",
    "from convlab2.policy.rule.multiwoz import RulePolicy\n",
    "from convlab2.policy.dqn import DQN\n",
    "from convlab2.policy.rlmodule import Memory\n",
    "from convlab2.evaluator.multiwoz_eval import MultiWozEvaluator\n",
    "import logging\n",
    "\n",
    "logging.disable(logging.DEBUG)\n",
    "\n",
    "# make the computations deterministic\n",
    "import random\n",
    "import torch\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(123)\n",
    "random.seed(123)\n",
    "torch.manual_seed(123)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The environment that the agent will interact with contains\n",
    "a user simulator that uses a rule-based dialogue policy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "usr_policy = RulePolicy(character='usr')\n",
    "usr_simulator = PipelineAgent(None, None, usr_policy, None, 'user') # type: ignore\n",
    "\n",
    "dst = RuleDST()\n",
    "evaluator = MultiWozEvaluator()\n",
    "env = Environment(None, usr_simulator, None, dst, evaluator)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see how the reward is defined in the `Environment` class of *ConvLab-2*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%script false --no-raise-error\n",
    "#\n",
    "# file convlab2/dialog_agent/env.py\n",
    "#\n",
    "class Environment():\n",
    "\n",
    "    # (...)\n",
    "\n",
    "    def step(self, action):\n",
    "\n",
    "        # (...)\n",
    "\n",
    "        if self.evaluator:\n",
    "            if self.evaluator.task_success():\n",
    "                reward = 40\n",
    "            elif self.evaluator.cur_domain and self.evaluator.domain_success(self.evaluator.cur_domain):\n",
    "                reward = 5\n",
    "            else:\n",
    "                reward = -1\n",
    "        else:\n",
    "            reward = self.usr.get_reward()\n",
    "        terminated = self.usr.is_terminated()\n",
    "\n",
    "        return state, reward, terminated\n"
   ]
  },
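  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before looking at how these rewards shape the learned policy, a small numeric check (a sketch with hypothetical dialogues, not part of the lab's pipeline): using the reward values above ($40$ for task success, $-1$ otherwise) and the `discounted_return` helper defined earlier, the cell below compares the return of a short successful dialogue with that of a longer one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# hypothetical reward sequences built from the values used in Environment.step\n",
    "short_success = [-1, -1, 40]    # task solved after 3 system turns\n",
    "long_success = [-1] * 9 + [40]  # task solved after 10 system turns\n",
    "gamma = 0.99\n",
    "\n",
    "print('short dialogue:', round(discounted_return(short_success, gamma), 2))\n",
    "print('long dialogue: ', round(discounted_return(long_success, gamma), 2))"
   ]
  },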
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As can be seen above, an action that leads to successful completion of the task receives a reward of $40$,\n",
    "an action that leads to correctly handling the current domain receives a reward of $5$,\n",
    "and every other action receives a \"penalty\" of $-1$. Such a definition of the return favors short dialogues\n",
    "that successfully complete the task.\n",
    "\n",
    "The neural network that we will use to approximate the function $Q^*$ has the following architecture"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%script false --no-raise-error\n",
    "#\n",
    "# file convlab2/policy/rlmodule.py\n",
    "# the EpsilonGreedyPolicy class used by DQN\n",
    "#\n",
    "class EpsilonGreedyPolicy(nn.Module):\n",
    "    def __init__(self, s_dim, h_dim, a_dim, epsilon_spec={'start': 0.1, 'end': 0.0, 'end_epoch': 200}):\n",
    "        super(EpsilonGreedyPolicy, self).__init__()\n",
    "\n",
    "        self.net = nn.Sequential(nn.Linear(s_dim, h_dim),\n",
    "                                 nn.ReLU(),\n",
    "                                 nn.Linear(h_dim, h_dim),\n",
    "                                 nn.ReLU(),\n",
    "                                 nn.Linear(h_dim, a_dim))\n",
    "        # (...)"
   ]
  },
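  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The excerpt above omits the action-selection logic. As a rough sketch of how epsilon-greedy selection typically works (this is not the ConvLab-2 implementation; the helper names below are assumptions): with probability $\\epsilon$, annealed from `start` to `end` over `end_epoch` epochs as suggested by `epsilon_spec`, a random action is taken to explore; otherwise the action with the highest predicted $Q$ value is chosen."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# a rough sketch of epsilon-greedy action selection (assumed helpers, not ConvLab-2 code)\n",
    "import random\n",
    "import torch\n",
    "\n",
    "def select_action_epsilon_greedy(q_net, state_vec, epsilon):\n",
    "    if random.random() < epsilon:\n",
    "        # explore: pick a random action index\n",
    "        return random.randrange(q_net[-1].out_features)\n",
    "    # exploit: pick the action with the highest predicted Q value\n",
    "    with torch.no_grad():\n",
    "        return q_net(state_vec).argmax().item()\n",
    "\n",
    "def annealed_epsilon(epoch, start=0.1, end=0.0, end_epoch=200):\n",
    "    # linear annealing from start to end over end_epoch epochs\n",
    "    frac = min(epoch / end_epoch, 1.0)\n",
    "    return start + (end - start) * frac\n",
    "\n",
    "# usage with a toy network of the same shape as the one above\n",
    "toy_net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3))\n",
    "print(select_action_epsilon_greedy(toy_net, torch.zeros(4), annealed_epsilon(epoch=10)))"
   ]
  },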
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "policy = DQN(is_train=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each step of the training procedure consists of two stages:\n",
    "\n",
    " 1. Generating, using the policy (the `policy.predict` method) and the environment (the `env.step` method), a *trajectory*, i.e. a sequence of transitions between states composed of tuples of the form:\n",
    "    - the source state,\n",
    "    - the action taken (the system act),\n",
    "    - the reward,\n",
    "    - the target state,\n",
    "    - the end-of-dialogue flag."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# cf. ConvLab-2/convlab2/policy/dqn/train.py\n",
    "def sample(env, policy, batch_size, warm_up):\n",
    "    buff = Memory()\n",
    "    sampled_num = 0\n",
    "    max_trajectory_len = 50\n",
    "\n",
    "    while sampled_num < batch_size:\n",
    "        # start a new dialogue\n",
    "        s = env.reset()\n",
    "\n",
    "        for t in range(max_trajectory_len):\n",
    "            try:\n",
    "                # the dialogue agent takes an action\n",
    "                a = policy.predict(s, warm_up=warm_up)\n",
    "\n",
    "                # the environment's response to the action taken\n",
    "                next_s, r, done = env.step(a)\n",
    "\n",
    "                # add the tuple to the dataset\n",
    "                buff.push(torch.Tensor(policy.vector.state_vectorize(s)).numpy(),       # source state\n",
    "                          policy.vector.action_vectorize(a),                            # action\n",
    "                          r,                                                            # reward\n",
    "                          torch.Tensor(policy.vector.state_vectorize(next_s)).numpy(),  # target state\n",
    "                          0 if done else 1)                                             # end-of-dialogue flag\n",
    "\n",
    "                s = next_s\n",
    "\n",
    "                if done:\n",
    "                    break\n",
    "            except:\n",
    "                # abort the dialogue on any error raised by the simulator\n",
    "                break\n",
    "\n",
    "        # count the turns sampled in this dialogue\n",
    "        sampled_num += t\n",
    "\n",
    "    return buff"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " 2. Using the generated tuples to update the policy.\n",
    "\n",
    "The `train` function that performs a single reinforcement learning step has the following form"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train(env, policy, batch_size, epoch, warm_up):\n",
    "    print(f'epoch: {epoch}')\n",
    "    buff = sample(env, policy, batch_size, warm_up)\n",
    "    policy.update_memory(buff)\n",
    "    policy.update(epoch)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `update` method of the `DQN` class, used to update the weights, has the following form"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%script false --no-raise-error\n",
    "#\n",
    "# file convlab2/policy/dqn/dqn.py\n",
    "# the DQN class\n",
    "#\n",
    "class DQN(Policy):\n",
    "    # (...)\n",
    "    def update(self, epoch):\n",
    "        total_loss = 0.\n",
    "        for i in range(self.training_iter):\n",
    "            round_loss = 0.\n",
    "            # 1. batch a sample from memory\n",
    "            batch = self.memory.get_batch(batch_size=self.batch_size)\n",
    "\n",
    "            for _ in range(self.training_batch_iter):\n",
    "                # 2. calculate the Q loss\n",
    "                loss = self.calc_q_loss(batch)\n",
    "\n",
    "                # 3. make an optimization step\n",
    "                self.net_optim.zero_grad()\n",
    "                loss.backward()\n",
    "                self.net_optim.step()\n",
    "\n",
    "                round_loss += loss.item()\n",
    "\n",
    "        # (...)"
   ]
  },
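  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The excerpt above elides `calc_q_loss`. As a hedged sketch of what such a loss computation usually looks like in DQN (this is not the ConvLab-2 implementation; names such as `net`, `target_net` and the batch fields are assumptions): the Q network is evaluated on the source states, a bootstrap target $r + \\gamma \\max_{a'} Q_{target}(s', a')$ is computed from a separate target network, and the squared difference between the two is minimized. Keeping a separate, periodically updated target network stabilizes the bootstrap targets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# a sketch of a DQN temporal-difference loss (assumed names, not ConvLab-2 code)\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "def td_loss(net, target_net, batch, gamma=0.99):\n",
    "    # batch fields are assumed to be tensors: states, actions (long indices),\n",
    "    # rewards, next_states, and a mask that is 0 for terminal transitions\n",
    "    q_values = net(batch['states']).gather(1, batch['actions'].unsqueeze(1)).squeeze(1)\n",
    "    with torch.no_grad():\n",
    "        next_q = target_net(batch['next_states']).max(dim=1).values\n",
    "        target = batch['rewards'] + gamma * batch['masks'] * next_q\n",
    "    return F.mse_loss(q_values, target)"
   ]
  },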
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will illustrate the course of the training process by running 100 iterations. In each iteration we will limit ourselves to 100 examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "epoch = 100\n",
    "batch_size = 100\n",
    "\n",
    "train(env, policy, batch_size, 0, warm_up=True)\n",
    "\n",
    "for i in range(1, epoch):\n",
    "    train(env, policy, batch_size, i, warm_up=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check what system acts the `DQN` policy returns in response to the changing dialogue state."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "from convlab2.dialog_agent import PipelineAgent\n",
    "dst.init_session()\n",
    "agent = PipelineAgent(nlu=None, dst=dst, policy=policy, nlg=None, name='sys')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "agent.response([['Inform', 'Hotel', 'Price', 'cheap'], ['Inform', 'Hotel', 'Parking', 'yes']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "agent.response([['Inform', 'Hotel', 'Area', 'north']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "agent.response([['Request', 'Hotel', 'Area', '?']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.response([['Inform', 'Hotel', 'Day', 'tuesday'], ['Inform', 'Hotel', 'People', '2'], ['Inform', 'Hotel', 'Stay', '4']])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The quality of the trained model can be assessed by measuring the so-called *task success rate*,\n",
    "i.e. the ratio of the number of dialogues completed successfully to the number of all dialogues."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from convlab2.dialog_agent.session import BiSession\n",
    "\n",
    "sess = BiSession(agent, usr_simulator, None, evaluator)\n",
    "dialog_num = 100\n",
    "task_success_num = 0\n",
    "max_turn_num = 50\n",
    "\n",
    "# cf. ConvLab-2/convlab2/policy/evaluate.py\n",
    "for dialog in range(dialog_num):\n",
    "    random.seed(dialog)\n",
    "    np.random.seed(dialog)\n",
    "    torch.manual_seed(dialog)\n",
    "    sess.init_session()\n",
    "    sys_act = []\n",
    "    task_success = 0\n",
    "\n",
    "    for _ in range(max_turn_num):\n",
    "        sys_act, _, finished, _ = sess.next_turn(sys_act)\n",
    "\n",
    "        if finished is True:\n",
    "            task_success = sess.evaluator.task_success()\n",
    "            break\n",
    "\n",
    "    print(f'dialog: {dialog:02} success: {task_success}')\n",
    "    task_success_num += task_success\n",
    "\n",
    "print('')\n",
    "print(f'task success rate: {task_success_num/dialog_num:.2f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Note**: To obtain a policy whose effectiveness is comparable to the results reported on the\n",
    "[ConvLab-2](https://github.com/thu-coai/ConvLab-2/blob/master/README.md) page, both the number of iterations\n",
    "and the number of examples generated in each of them have to be increased accordingly.\n",
    "To speed up the training process it is worth parallelizing the computations, as shown in the\n",
    "[train.py](https://github.com/thu-coai/ConvLab-2/blob/master/convlab2/policy/dqn/train.py) script.\n",
    "\n",
    "References\n",
    "----------\n",
    " 1. Rieser, V. and Lemon, O. (2011). Reinforcement Learning for Adaptive Dialogue Systems: A Data-driven Methodology for Dialogue Management and Natural Language Generation. Theory and Applications of Natural Language Processing. Springer. https://doi.org/10.1007/978-3-642-24942-6\n",
    "\n",
    " 2. Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learning: An Introduction, Second Edition. MIT Press, Cambridge, MA. http://incompleteideas.net/book/RLbook2020.pdf\n",
    "\n",
    " 3. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. and Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. NIPS Deep Learning Workshop. https://arxiv.org/pdf/1312.5602.pdf"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  },
  "author": "Marek Kubis",
  "email": "mkubis@amu.edu.pl",
  "lang": "pl",
  "subtitle": "10. Zarz\u0105dzanie dialogiem z wykorzystaniem technik uczenia maszynowego [laboratoria]",
  "title": "Systemy Dialogowe",
  "year": "2021"
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
28
tasks/zad8/pl/intentClassification.py
Normal file
@@ -0,0 +1,28 @@
import pandas as pd


def flatten(t):
    # flatten a list of lists into a single list
    return [item for sublist in t for item in sublist]


def getData():
    # collect utterances (column 1) and act labels (column 2) from the annotated dialogue files
    Xdata = []
    Ydata = []
    pathOut = './tasks/zad8/pl/'
    pathIn = "./data/clean/"
    i = 0
    j = 0
    nr = 0
    for i in range(16, 20):
        for j in range(20):
            for nr in range(1, 5):
                fileName = pathIn + "dialog-" + str(i).zfill(2) + "-" + str(j).zfill(2) + "-" + str(nr).zfill(2) + ".tsv"
                try:
                    df = pd.read_csv(fileName, sep='\t', header=None, encoding="utf-8")
                    Xdata.append(df[1].tolist())
                    Ydata.append(df[2].tolist())
                except:
                    # skip dialogue files that are missing or cannot be parsed
                    pass
    return flatten(Xdata), flatten(Ydata)


x, y = getData()

print(y)