{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Machine Learning\n",
"# 4. Neural Networks – Introduction"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"# Useful imports\n",
"\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.1. Perceptron"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"https://www.youtube.com/watch?v=cNxadbrN_aI"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" height=\"100%\" src=\"http://m.natemat.pl/b94a41cd7322e1b8793e4644e5f82683,641,0,0,0.png\" alt=\"Frank Rosenblatt\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" src=\"http://m.natemat.pl/02943a7dc0f638d786b78cd5c9e75742,641,0,0,0.png\" height=\"100%\" alt=\"Frank Rosenblatt\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" height=\"100%\" src=\"https://upload.wikimedia.org/wikipedia/en/5/52/Mark_I_perceptron.jpeg\" alt=\"perceptron\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### The first linear perceptron\n",
"\n",
"* Frank Rosenblatt, 1957\n",
"* a camera connected to 400 photocells (image resolution: 20 × 20)\n",
"* weights – potentiometers updated by small electric motors"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Training the perceptron\n",
"\n",
"Rosenblatt's perceptron training cycle:\n",
"\n",
"1. Photograph the board with the next object.\n",
"1. Observe which output lamp lights up.\n",
"1. Check whether it is the correct lamp.\n",
"1. Send a \"reward\" or \"punishment\" signal."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Activation function\n",
"\n",
"The bipolar function:\n",
"\n",
"$$ g(z) = \\left\\{ \n",
"\\begin{array}{rl}\n",
"1 & \\textrm{if $z > \\theta_0$} \\\\\n",
"-1 & \\textrm{otherwise}\n",
"\\end{array}\n",
"\\right. $$\n",
"\n",
"where $z = \\theta_0x_0 + \\ldots + \\theta_nx_n$,<br/>\n",
"$\\theta_0$ is the activation threshold,<br/>\n",
"$x_0 = 1$."
]
},
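The bipolar activation above can be sketched in a few lines of NumPy. The weight vector and input below are made-up illustrative values (not from the lecture); `x_0 = 1` absorbs the threshold term, so the comparison is against 0:

```python
import numpy as np

def bipolar(z, theta_0=0.0):
    """Bipolar step activation: 1 if z > theta_0, otherwise -1."""
    return 1 if z > theta_0 else -1

# Weighted sum z = theta . x, with x_0 = 1 playing the role of the bias input.
theta = np.array([0.5, 1.0, -1.0])   # illustrative weights
x = np.array([1.0, 2.0, 0.5])        # x_0 = 1, then two features
z = theta @ x                        # 0.5 + 2.0 - 0.5 = 2.0
print(bipolar(z))                    # 1
```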
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def bipolar_plot():\n",
" matplotlib.rcParams.update({'font.size': 16})\n",
"\n",
" plt.figure(figsize=(8,5))\n",
" x = [-1,-.23,1] \n",
" y = [-1, -1, 1]\n",
" plt.ylim(-1.2,1.2)\n",
" plt.xlim(-1.2,1.2)\n",
" plt.plot([-2,2],[1,1], color='black', ls=\"dashed\")\n",
" plt.plot([-2,2],[-1,-1], color='black', ls=\"dashed\")\n",
" plt.step(x, y, lw=3)\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_color('none')\n",
" ax.spines['top'].set_color('none')\n",
" ax.xaxis.set_ticks_position('bottom')\n",
" ax.spines['bottom'].set_position(('data',0))\n",
" ax.yaxis.set_ticks_position('left')\n",
" ax.spines['left'].set_position(('data',0))\n",
"\n",
" plt.annotate(r'$\\theta_0$',\n",
" xy=(-.23,0), xycoords='data',\n",
" xytext=(-50, +50), textcoords='offset points', fontsize=26,\n",
" arrowprops=dict(arrowstyle=\"->\"))\n",
"\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcwAAAEeCAYAAAAHLSWiAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3de1yUZf7/8fcoZzEVT7mmwapYQsUqth5yKaxdUpHWFHTDxNrU1HbxlNpiWlp00KLdVkOtNA+PBH0Y5LElxbJsk3bZDAo1wbTNNHU7iKDA/fvDL/NrHA43h2EYeD0fj3nIXNd9DZ+5He4398x1X1gMwxAAAKhaC2cXAACAKyAwAQAwgcAEAMAEAhMAABMITAAATCAwAQAwwa2afq45AZwgIiJCu3btcnYZQHNlqaiRM0ygEfruu++cXQKAqxCYAACYQGACAGACgQkAgAkEJgAAJhCYAACYQGACAGACgQkAgAkEJgAAJhCYAACYQGACAGACgQkAgAkEJgAAJhCYAACYQGACAGACgQkAgAkEJlCFkydP6pFHHtHAgQPl4+Mji8WigoICU2PLysqUmJgof39/eXl56ZZbbtGWLVscWzAAhyEwgSocPXpUKSkpateunYYMGVKjsQsWLNCiRYs0ffp07dy5UwMGDNCYMWO0Y8cOB1ULwJEshmFU1V9lJ9DUlZWVqUWLK79Xrl69Wg899JDy8/Pl7+9f5bjTp0+rW7dumjdvnp544glr+9ChQ3XmzBl9+umnVY4PDQ1VVlZWnesHUCuWiho5wwSqUB6WNbV7925dunRJsbGxNu2xsbE6dOiQ8vPz66M8AA2IwAQcICcnR56enurZs6dNe1BQkCQpNzfXGWUBqIMq35K9/fbb7Trj4uIUFxen7777TqNHj7Yb8/DDDysmJkYnTpzQ+PHj7fpnzZqlyMhI5eXlafLkyXb9CQkJuvPOO5Wdna34+Hi7/qefflqDBg3Shx9+qMcee8yuPykpSSEhIcrIyNCSJUvs+pOTk9W7d2+9/fbbWrZsmV3/unXr1K1bN23atEkrVqyw69+8ebM6dOigNWvWaM2aNXb9O3bskI+Pj5YvX66UlBS7/szMTEnS0qVLtW3bNps+b29v7dy5U5K0ePFivfvuuzb97du3t04amT9/vg4cOGDTf91112n9+vWSpPj4eGVnZ9v0BwYGauXKlZKkSZMm6fDhwzb9ISEhSkpKknTlTOjkyZM2/QMHDlRiYqIk6d5779XZs2dt+ocOHaoFCxZIku6++25dvHjRpn/EiBGaPXu2JOn222/X1aKjozV16lQVFhZq2LBhdv11fe2FjJ2lf/zXXRculdr1NTbfrI1XlwlJzi4DaBAFzwyX1HiOe5mZmRW+JetWq2cHuKCdX1lUXNb4wxJA48SkHzQb/vO2O7sE0zjDRHNSfobZiHCGCZSrzQ9oTWbJvvHGG5owYYKOHDli8znmmjVrNHHiRB07dkwBAQGVjg/NWKisxncQAZo1Jv0ADhARESEPDw9t2LDBpn39+vUKDg6uMiwBNE6cYQLV2Lx5syTpk08+kSTt3LlTHTt2VMeOHRUWFiZJcnNz04QJE/Tqq69Kkjp16qQZM2YoMTFRrVu3Vt++fbVp0ybt2bNHaWlpznkiAOqEwASqMWbMGJv7U6dOlSSFhYVZZz2XlpaqtNR2QtFTTz0lX19fvfTSSzp16pR69+6tlJQURUZGNkjdAOoXgQlUo5qJcZVu07JlSyUkJCghIcERZQFoYHyGCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBCQCACQ
QmAAAmEJgAAJhAYAIAYAKBCQCACQQmAAAmEJgAAJhAYAIAYAKBiTr73//+p8WLFyskJEStW7eWn5+fwsPDtWPHDmeXBgD1xs3ZBcC1ZWZm6g9/+IO++eYbm/a9e/cqMzNTK1as0OTJk51UHQDUH84wUWuZmZkaNmyYvvnmG8XGxurgwYM6f/68Pv74Yw0cOFCGYWjmzJk6efKks0sFgDojMFErp0+f1tixY3Xx4kU999xzWrdunUJDQ9W2bVv1799faWlp8vX1VWFhoTZu3OjscgGgzghM1Mrs2bP17bffasSIEZozZ45df8eOHTV48GBJ0r59+xq6PACodwQmauyLL77Qhg0bZLFY9Nxzz1W6XceOHSVJx48fb6jSAMBhCEzUWHJyssrKynTnnXfqxhtvrHS7y5cv2/wLAK6MwESNlJWV6c0335Qk3XfffVVue+7cOUmSt7e3w+tylBMnTmj06NFq06aNrrnmGo0aNUpfffWVqbEWi6XCW3Z2toOrBuAIXFaCGsnOztapU6ckSXFxcYqLi6t2TLdu3RxclWMUFhYqPDxcnp6eWrt2rSwWixISEnTHHXfo008/VatWrap9jLi4OLvLagIDAx1VMgAHIjBRI7WZwNOrVy8HVOJ4q1at0rFjx5SXl6eePXtKkm6++Wb16tVLycnJmjlzZrWP0bVrVw0YMMDRpQJoALwlixr517/+JUn63e9+p4sXL1Z627p1q3VM3759nVVunaSnp2vAgAHWsJSkgIAADR48WGlpaU6sDIAzEJiokcOHD0uSunfvLi8vr0pvH330kXXMb37zG5vH2LRpk/r37y9vb2+1b99e0dHROnbsWIM+DzNycnIUHBxs1x4UFKTc3FxTj7FixQp5enrKx8dH4eHhev/99+u7TAANhMBEjZSv2uPn51fldtu3b5ck3Xjjjerevbu1/ZVXXtHYsWPl7u6uF198UTNnztSePXs0cOBA05NpGsq5c+fUrl07u3Y/Pz+dP3++2vGxsbFavny5MjIytHLlSp09e1bh4eHKzMx0QLUAHI3PMFEjFy9elCR5eXlVus0XX3yhzz77TJI0fvx4a/u5c+f06KOPKiQkRPv27ZO7u7sk6e6771b//v312GOPaf369Q6svuYsFotdm2EYpsauW7fO+vWQIUMUFRWl4OBgJSQkaP/+/Xbbr1y5UitXrpQknTlzppYVA3AUzjBRIx4eHpKkCxcuVLrN8uXLJV25nOTns2jfeust/fjjj/rzn/9sDUvpymec4eHh2rJliwoLCx1TeC20a9fOemnMz50/f77CM8/qtG7dWsOHD9fBgwcr7J80aZKysrKUlZVlXfQBQONBYKJGrr/+eklSXl5ehf35+fnWs6Tp06erS5cu1r6PP/5YkqxL5v3cbbfdpqKiIuuZaWMQFBSknJwcu/bc3Fz16dOnVo9pGEaFZ60AGj8CEzUSFhYmSXrnnXf03//+16bvwoULiomJUXFxsQIDA7Vw4UKb/q+//lqSdN1119k9bnlbY/rLJiNHjtRHH31kMyGpoKBAH3zwgUaOHFnjx/vhhx+0fft2/frXv67PMgE0EAITNTJx4kS5ubmpuLhYI0eOtP5Jr3feeUeDBw/WwYMH5efnp9TUVLsL+8vfbvX09LR73PLVgBrTW7IPPfSQ/P39FRUVpbS0NKWnpysqKkrdunWzWYzg+PHjcnNz05NPPmltW7p0qR566CFt3LhRmZmZWrt2rQYPHqxTp05pyZIlzng6AOqIST+okRtvvFELFy7UggUL9Mknn+jWW2+16f/lL3+pLVu26Oabb7Yb6+PjI0kqLi62Wy6vqKjIZpvGoFWrVtqzZ49mzJih8ePHyzAMDR06VElJSfL19bVuZxiGSktLVVZWZm3r3bu3tm7dqq1bt+r777/XNddco8GDB+vVV1+122cAXAOBiRpLSEhQYGCgkpKSlJOTI8MwFBgYqJiYGE2bNq3S0OvataukK2+7Xr36z4kTJy
RV/HatM3Xv3l1btmypcht/f3+7mbORkZGKjIx0ZGkAGhiBiVqJjo5WdHR0jcbceuutSk5O1oEDB+wC84MPPpCXl1e
"text/plain": [
"<Figure size 576x360 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"bipolar_plot()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Perceptron – diagram\n",
"\n",
"<img src=\"perceptron.png\" width=\"60%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Perceptron – how it works\n",
"\n",
"1. Set the initial values of $\\theta$ (the zero vector or small random numbers near 0).\n",
"1. For each example $(x^{(i)}, y^{(i)})$, for $i=1,\\ldots,m$\n",
" * Compute the output value $o^{(i)} = g(\\theta^{T}x^{(i)}) = g(\\sum_{j=0}^{n} \\theta_jx_j^{(i)})$\n",
" * Update the weights (the so-called *perceptron rule*):\n",
" $$ \\theta := \\theta + \\Delta \\theta $$\n",
" $$ \\Delta \\theta = \\alpha(y^{(i)}-o^{(i)})x^{(i)} $$"
]
},
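The perceptron rule above can be turned into a short NumPy training loop. This is a minimal sketch, not the notebook's own code; the learning rate, epoch count, and the AND-gate toy data are illustrative choices:

```python
import numpy as np

def perceptron_train(X, y, alpha=0.1, epochs=10):
    """Perceptron rule: theta += alpha * (y - o) * x, with bipolar output o."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend x_0 = 1
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            o = 1 if theta @ x_i > 0 else -1      # bipolar activation
            theta += alpha * (y_i - o) * x_i      # zero when o == y_i
    return theta

# Linearly separable example: logical AND with bipolar labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
theta = perceptron_train(X, y)
preds = [1 if theta @ np.r_[1.0, x] > 0 else -1 for x in X]
print(preds)  # [-1, -1, -1, 1]
```

Because the AND data are linearly separable, the loop converges to a separating `theta` in a handful of epochs, matching the convergence claim made later in this section.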
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"$$\\theta_j := \\theta_j + \\Delta \\theta_j $$\n",
"\n",
"If the example was classified **correctly**:\n",
"\n",
"* $y^{(i)}=1$ and $o^{(i)}=1$ : $$\\Delta\\theta_j = \\alpha(1 - 1)x_j^{(i)} = 0$$\n",
"* $y^{(i)}=-1$ and $o^{(i)}=-1$ : $$\\Delta\\theta_j = \\alpha(-1 - (-1))x_j^{(i)} = 0$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"In other words: if you got it right, change nothing."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"$$\\theta_j := \\theta_j + \\Delta \\theta_j $$\n",
"\n",
"If the example was classified **incorrectly**:\n",
"\n",
"* $y^{(i)}=1$ and $o^{(i)}=-1$ : $$\\Delta\\theta_j = \\alpha(1 - (-1))x_j^{(i)} = 2 \\alpha x_j^{(i)}$$\n",
"* $y^{(i)}=-1$ and $o^{(i)}=1$ : $$\\Delta\\theta_j = \\alpha(-1 - 1)x_j^{(i)} = -2 \\alpha x_j^{(i)}$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"In other words: shift the weights in the appropriate direction."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Perceptron – advantages\n",
"\n",
"* intuitive and simple\n",
"* easy to implement\n",
"* if the data are linearly separable, the algorithm converges in finite time"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Perceptron – drawbacks\n",
"\n",
"* if the data are not linearly separable, the algorithm does not converge"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def plot_perceptron():\n",
" plt.figure(figsize=(12,3))\n",
"\n",
" plt.subplot(131)\n",
" plt.ylim(-0.2,1.2)\n",
" plt.xlim(-0.2,1.2)\n",
"\n",
" plt.title('AND')\n",
" plt.plot([1,0,0], [0,1,0], 'ro', markersize=10)\n",
" plt.plot([1], [1], 'go', markersize=10)\n",
"\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_color('none')\n",
" ax.spines['top'].set_color('none')\n",
" ax.xaxis.set_ticks_position('none')\n",
" ax.spines['bottom'].set_position(('data',0))\n",
" ax.yaxis.set_ticks_position('none')\n",
" ax.spines['left'].set_position(('data',0))\n",
"\n",
" plt.xticks(np.arange(0, 2, 1.0))\n",
" plt.yticks(np.arange(0, 2, 1.0))\n",
"\n",
"\n",
" plt.subplot(132)\n",
" plt.ylim(-0.2,1.2)\n",
" plt.xlim(-0.2,1.2)\n",
"\n",
" plt.plot([1,0,1], [0,1,1], 'go', markersize=10)\n",
" plt.plot([0], [0], 'ro', markersize=10)\n",
"\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_color('none')\n",
" ax.spines['top'].set_color('none')\n",
" ax.xaxis.set_ticks_position('none')\n",
" ax.spines['bottom'].set_position(('data',0))\n",
" ax.yaxis.set_ticks_position('none')\n",
" ax.spines['left'].set_position(('data',0))\n",
"\n",
" plt.title('OR')\n",
" plt.xticks(np.arange(0, 2, 1.0))\n",
" plt.yticks(np.arange(0, 2, 1.0))\n",
"\n",
"\n",
" plt.subplot(133)\n",
" plt.ylim(-0.2,1.2)\n",
" plt.xlim(-0.2,1.2)\n",
"\n",
" plt.title('XOR')\n",
" plt.plot([1,0], [0,1], 'go', markersize=10)\n",
" plt.plot([0,1], [0,1], 'ro', markersize=10)\n",
"\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_color('none')\n",
" ax.spines['top'].set_color('none')\n",
" ax.xaxis.set_ticks_position('none')\n",
" ax.spines['bottom'].set_position(('data',0))\n",
" ax.yaxis.set_ticks_position('none')\n",
" ax.spines['left'].set_position(('data',0))\n",
"\n",
" plt.xticks(np.arange(0, 2, 1.0))\n",
" plt.yticks(np.arange(0, 2, 1.0))\n",
"\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAqwAAADGCAYAAAAXMtIlAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAWVUlEQVR4nO3df5DUd33H8dd7cRO5XJYcaUQn1YEh0Qop2IYqmBACiS1YQ1J6RB1Bo2PUOUwnV5z6MzVJUepUClYnJiZq6FG1sM4kqYBOmhFsMFQ5M57lgt4dXEhik4YcJnCXHyu8+8f3e7C3t7e3y+339nN3z8fMdzb3/X73u5/95vPm+9rvfr+fNXcXAAAAEKpUrRsAAAAAlEJgBQAAQNAIrAAAAAgagRUAAABBI7ACAAAgaARWAAAABI3ACgAAgKARWEeJmW03Mzez/ymxjsfTz4ZYfmW8fFPB/HvznutmljOzHjP7VbzsXWY2qdrvCZhozOwdZpY1s6fM7GUzO2JmPzazj5jZq4qs311QmyfM7Fkz22lmy2rxHoDxwMw+GdfU14ZY/iYz6zOzg2Z2TsGyt5lZi5k9bmYvmdlRM/upma01s8lDbO/hIrXcY2a7zOy9SbxHDGT8cEDyzOx1kp5Q9AHBJL3V3X9eZL38/xmN7v79guVXSvqxpK+4+8158++V9AFJd0l6On6djKQ3S1ooabKkn0l6t7t3V+t9ARNFHEbvkvQhSccl/UDSIUnnS1om6fWS9kl6l7s/k/e8bkmvlfSP8ayzFdXlNYrq9APu/q+j8y6A8SM+CbNH0lslLXH3XQXLHpb0tiLLvijp05JelrRT0q8l1Uv6c0kXS+qQ9E537yx4vYclXSZpvaRXJKUlXSTpOklnSfqsu38xgbeKfu7OlPAk6VOSXNKX48evD7GeS/qtpBclPSZpUsHyK+N1NhXMvzee/5Yi2zxfUku8/ICk+lrvDyamsTZJ2hDX0COSXluw7GxJX89b/qq8Zd2Sfldke9fH6x+u9XtjYhqrk6IPfy9JOijpnLz5n4zr66sF6zfnHQtnFixLSbo1Xt4h6dyC5Q/Hy+oL5l8u6YSkXkmTa71PxvPEJQGj4wZJz0v6nKTfSHqPmb16iHX/T9LXJP1R/LwRcffnJL1f0oOS3iTp4yPdJjCRmNkbJd0s6TlJy9396fzl7v6ypCZJP5E0X9G3HcPZpuhM7evN7ILqthiYGNz9MUl/L2mGpC9JkpnNknSbpC5FJ4sUzz9f0j8oCrjXuHtXwbZOuvutkr6r6Mzp35bZhocldUqqU3TcRkIIrAkzs8sUBcWsu78kaYuk8yStKPG09YoC7udLBNuyefQxsP+riutHuj1ggvmAon8rv+HuzxZboaDGPljmdi1+zI2secCEtkHSf0tqMrOrJW1W9BX9h9y9N2+96yWdI2mbu3eU2N4X4scPnUFbqOUEEViT13/waokftyj6WmHIg5q790j6J0XXxa2pUjt+qqiY5ha7OQTAkN4ePz40zHo/kfR7SX9Wxk2O71F08HzM3X83wvYBE5a7n1D0beTLiq4tnyfpX9z9JwWrllXH7r5f0jOS3hDff1KSmS1SdO3rEUWXEiAhBJcExXcmXi/psKKDmdz9kJn9VNISM3uDux8e4umbFH19/2kzu9vdXxhJW9z9FTPrkTRN0lRFlx4AGN5r48cnS63k7i+a2XOKaux8na6xV5vZrfF/ny1plqKbrl5U9T6QAhOWux8wsy2SPqwobH6myGpl1XHeOtMkvU7S/xYs+4yZvaIoP10s6a8knZT08fjyICSEwJqsRknnSvpa/JVhvxZFdxveIOn2Yk90914zW6foetZPKLpOZ6Rs+FUAjEB/jeXX+9mSPl+w3ouSlrn77lFpFTCOmdkMSf1DS01TdDb1P0eyyfix2DBKny74+4Sk97r7thG8HsrAJQHJ6v/af0vB/K2KhsW4wcxKhc
hvKLr7sdnMXjOShpjZ2YrOrJ6Q1DOSbQETTP9NVn9YaqX4evOpimo7v8aed3dzd5M0RdGB9aSkrWZWcpsASouPod9UdInNzYpuqrrbzOoLVi2rjmMXFjwn37lxLddLeqeimzG/bWZzKm07KkNgTYiZzZR0Rfzn/vwBhxUdzM5SdGfjlUNtw91zis7M1CsaYWAk3q7ojPov3f33I9wWMJE8Ej9eNcx6VyiqsZ/H19UN4u4vuPv3JH1E0msUDYcF4Mw1SVos6R53/4qiY+Z0nR77uF9ZdRyPMjBN0ZBzhZcDnOLuve6+U9JKRWH528OcgMIIEViTc4OirxV+rOjTX+F0f7zecHcUf0dSm6SPKirCisVF1P81xr+fyTaACWyzoq8Gb4yHxhmkoMa+PdwG3f07kvZKepeZLaxWQ4GJJL4U4EuKfphnbTx7g6IfymmKb4jqt1XRpTgr4xNKQ+m//vVb5bQhvrnr+5L+VNHNlEgIgTUBZpZSNBTOCUnvc/cPF06KbsZ6TtJfm1lmqG25+0lJn1V0Rrbis6xmNlXRAfcdin7R446K3xAwgbn7ryV9VdIfSLrfzKblLzezs+LlVyoaXqfcX666LX68tSoNBSaQ+EPitxSd3byx/8bk+NuNDyq6NOebZlYXzz+i6OzrqyX9Rxx287eXMrPPSXqfonFV/7mC5tym6EPtLfHxHwngpqtkXK1oSKrtQ32lEN+1/2+S/kbSuyXdPdTG3P0H8c/CXT7M637MzJ5WdGY3o2gQ40Ua+NOsxyt9MwD0CUXjJ79fUoeZFf406xsktUq6Nr6UZ1ju/kMz+5miEUMujwcgB1CeNYo+JN7j7j/KX+Du7WZ2u6IxVdfp9I8AfFnRB8+/k9RuZjsU/ZhP/0+zvlFRWF3m7sfKbYi7/8rM7lM0YsD1kr43gveFIdjAm9dRDWb2XUVfDax092yJ9d4i6VFJe919QXx96y/d/S1F1r1c0n/Ff37F3W/OW3avBv66zglJxxQNzdEqKStpR3y2FsAZMrO/UHR5znxFB77jii7Z+a6kbxWGVTPrlnSeu583xPb+UtHYkQ+5+9UJNh0YN+Kzo79SdD/IJcWGfYzHG98r6U8kXe7uj+QtW6Bo2MiFiq5XfVHRz7VmJd3h7n1FtvewotF9zi124ifveN4u6Y853lYfgRUAAABB41oLAAAABI3ACgAAgKARWAEAABA0AisAAACCRmAFAABA0IYLrD6a09KlS0f19ZiYzmAK3ajuD2qWKfApdKO6P6hXpjEwDSmoM6xHjhypdRMAVICaBcYO6hVjWVCBFQAAAChEYAUAAEDQCKwAAAAIGoEVAAAAQSOwAgAAIGgEVgAAAASNwAoAAICgEVgBAAAQNAIrAAAAgkZgBQAAQNAIrAAAAAgagRUAAABBI7ACAAAgaARWAAAABI3ACgAAgKBVPbA++eSTuummm7RgwQLV1dXJzNTd3V3tlwFQBdQrMHZQr5jIqh5YOzs7tXXrVjU0NGjhwoXDP6GrS2pqkjIZqbU1emxqiuYDRXT1dKlpe5My6zNK3ZZSZn1GTdub1NVDn6lUxfWqgfu/9bet7H+URL1WD/WKUZGfy1KpYHKZuXup5SUXFnPy5EmlUlEOvueee3TjjTfq0KFDmj59+uCVd+6UGhulXE7K5TRP0j5JSqejKZuVli2rtAkYx3Z27FTjtkblTuSUO5k7NT+dSis9Ka3syqyWXZxon7EkN14FFdVsRfWqIvv/LkkfHdX9jzGEeh0W9YqwFOSyU0Yvlw1Zs1U/w9pfTMPq6op2Sl/fwJ0iRX/39UXLOdOKWFdPlxq3Naov1zfg4CdJuZM59eX61LitkTMHFSi7XsX+R2XoL9VHvSJRgeey2t10tWHD4B1SKJeTNm4cnfYgeBse2aDcidJ9Jncip4176TNJYP+jEvSX2mL/o2KB57LaBdYtW8rbMS0to9MeBG9L25ZBZwoK5U7m1NJGn0kC+x+VoL/UFvsfFQs8l9
UusB4/Xt31MO4df6W8vlDueqgM+x+VoL/UFvsfFQs8l9UusNbXV3c9jHv1Z5XXF8pdD5Vh/6MS9JfaYv+jYoHnsto
"text/plain": [
"<Figure size 864x216 with 3 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"plot_perceptron()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Activation functions\n",
"\n",
"Instead of the bipolar function, we can use a sigmoid function as the activation function."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def plot_activation_functions():\n",
" plt.figure(figsize=(16,7))\n",
" plt.subplot(121)\n",
" x = [-2,-.23,2] \n",
" y = [-1, -1, 1]\n",
" plt.ylim(-1.2,1.2)\n",
" plt.xlim(-2.2,2.2)\n",
" plt.plot([-2,2],[1,1], color='black', ls=\"dashed\")\n",
" plt.plot([-2,2],[-1,-1], color='black', ls=\"dashed\")\n",
" plt.step(x, y, lw=3)\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_color('none')\n",
" ax.spines['top'].set_color('none')\n",
" ax.xaxis.set_ticks_position('bottom')\n",
" ax.spines['bottom'].set_position(('data',0))\n",
" ax.yaxis.set_ticks_position('left')\n",
" ax.spines['left'].set_position(('data',0))\n",
"\n",
" plt.annotate(r'$\\theta_0$',\n",
" xy=(-.23,0), xycoords='data',\n",
" xytext=(-50, +50), textcoords='offset points', fontsize=26,\n",
" arrowprops=dict(arrowstyle=\"->\"))\n",
"\n",
" plt.subplot(122)\n",
" x2 = np.linspace(-2,2,100)\n",
" y2 = np.tanh(x2+ 0.23)\n",
" plt.ylim(-1.2,1.2)\n",
" plt.xlim(-2.2,2.2)\n",
" plt.plot([-2,2],[1,1], color='black', ls=\"dashed\")\n",
" plt.plot([-2,2],[-1,-1], color='black', ls=\"dashed\")\n",
" plt.plot(x2, y2, lw=3)\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_color('none')\n",
" ax.spines['top'].set_color('none')\n",
" ax.xaxis.set_ticks_position('bottom')\n",
" ax.spines['bottom'].set_position(('data',0))\n",
" ax.yaxis.set_ticks_position('left')\n",
" ax.spines['left'].set_position(('data',0))\n",
"\n",
" plt.annotate(r'$\\theta_0$',\n",
" xy=(-.23,0), xycoords='data',\n",
" xytext=(-50, +50), textcoords='offset points', fontsize=26,\n",
" arrowprops=dict(arrowstyle=\"->\"))\n",
"\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA4sAAAGKCAYAAACl0NTvAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOzdd3hUZfrG8XvSOxAIEHrokEgNCCKigIIi4KJiWVxAWXApKygqKEVWFHVRwZ+A4NqWoiAugqAioOCKsvSW0CGhk0ACCenJnN8fWQazQwlkMicz8/1cVy7zvmfOzJM45Mmdc857LIZhCAAAAACA3/MyuwAAAAAAQNlDWAQAAAAA2CEsAgAAAADsEBYBAAAAAHYIiwAAAAAAOz7X2c5SqUAxdO/eXd9//73ZZQCuwGJ2AW6A3gwUA70ZKLar9maOLAIOcPbsWbNLAAAAv0NvBkqOsAgAAAAAsENYBAAAAADYISwCAAAAAOwQFgEAAAAAdgiLAAAAAAA7hEUAAAAAgB3CIgAAAADADmERAAAAAGCHsAgAAAAAsENYBAAAAADYISwCAAAAAOwQFgEAAAAAdgiLAAAAAAA7hEUAAAAAgB3CIgAAAADADmERAAAAAGCHsAgAAAAAsENYBAAAAADYISwCAAAAAOwQFgEAAAAAdgiLAAAAAAA7hEUAAAAAgB3CIgAAAADADmERAAAAAGCHsAgAAAAAsENYhNs4fvy4RowYofbt2ysoKEgWi0UJCQnF2tdqtWrKlCmqU6eOAgIC1Lx5c3311VelWzAAAG6O3gy4NsIi3MbBgwe1aNEiVahQQR07dryhfcePH69XXnlFw4cP13fffad27drp4Ycf1rfffltK1QIA4P7ozYBrsxiGca3t19wIlCVWq1VeXoV///jHP/6hP//5zzpy5Ijq1Klzzf2SkpJUs2ZNjRkzRpMmTbLNd+nSRcnJydq5c+d1Xzs2NlabN28uUf2Ah7CYXYAboDfDZdCbAZdw1d7MkUW4jUvN6EatXLlSubm56tevX5H5fv36adeuXTpy5IgjygMAwOPQmwHX5rQji3feeafdXN++fTV06FBlZmbqvvvus9s+YMAADRgwQGfPntVDDz1kt/0vf/mLHnnkER07dkxPPPGE3fbnnntOPXv21L59+zRkyBC77ePGjVPXrl21fft2jRw50m7766+/rttuu02//vqrXnrpJbvt06ZNU4sWLbR69WpNnjzZbvvs2bPVqFEjffPNN3r77bftts+dO1c1a9bUwoULNWvWLLvtixcvVqVKlfTpp5/q008/tdv+7bffKigoSDNnztSiRYvstq9du1aSNHXqVC1fvrzItsDAQH333XeSpFdffVVr1qwpsr1ixYq26wLGjh2r3377rcj2GjVqaN68eZKkkSNHavv27UW2N2zYUHPmzJEkDR48WPv37y+yvUWLFpo2bZqkwh/8x48fL7K9ffv2mjJliiTpwQcf1Llz54ps79Kli8aPHy9Juvfee5WVlVVke0REhBYvXqwjR45owIABtvkLkbE6X6ODDG8/OdKpz0Yqsv80hz4nUBYlvNGjpE/BkcWSozfTm92qN1/y+/de48aNdfz4cd1xxx227QMGDFDTpk116623KiYmRhUrViyy//++97Zs2aLWrVvbtvPeWyvJM997999/v0aPHi3JvX7uXfp/6gAcWQQklUpQBAAAjpWfny8fHx+7+fDwcNv2K5kzZ4569OihLVu2KC8vr1RrBDwB1yzCLV3tuog6Y1aUyutxZBGegiOLZQK9GS7pRq5Z/POf/6zly5fr1KlTReYPHDighg0b6p///OcVj+D8HtcsAsV21d5s/ycbwENc+qX3xRdf1PTp05WVlSWL5fK/lY0bN+rWW2/V8uXL1aPHtX9Bjl09UZtL/ks0AABQ4RHE1NRUGYZRpDenpqbatgPuKCe/QGcv5io5PUdn03N09mKOzm
Xk6uzFHKVk5ColI1fnLuYqNTNXF3PytXPiPUX+jTgaYREeLzo6Wjk5OTp06JDq169vm4+Pj5ckNW3a1KzSAADwSPRmuBvDMHQ+M08nL2Tp9IVsnbyQrdMXsnQmLUdn0rJ1Ji1bSek5Op95Y6dPZ+YWKNi/9CIdYREer3v37vLz89P8+fM1ceJE2/y8efMUExOjqKgoE6sDAMDz0Jvhii5k5ikxJUPHUrJ0LDVTx1IydTw1SyfOZ+nk+Sxl5hY4/DVTMnIJi0BxLV68WJK0ZcsWSdJ3332niIgIRUREqFOnTlfcp3Llyho1apSmTJmi0NBQtWrVSgsXLtSPP/6opUuXOq12AADcUXF6s4+Pj/r376+PPvpIEr0ZZVdmbr4OJ2fo8NkMHU6+qCNnM5RwLlOJ5zJu+KjglXh7WVQx2E+Vw/xVKaTwo2KInyoF+ys82E/hIX6qGOxX+Hmwn4L8SjfOERbhVh5++OEi46FDh0qSOnXqdM3lhV977TWFhIRo+vTpOn36tBo1aqRFixapZ8+epVkuAABurzi9uaCgQAUFRY+60JthpqzcAu0/k659p9O1/0y6DiRd1MGkizpxPuv6O19FsJ+3IssHKrJcgCLLBahqucLPq4T5q3JogKqEBSg82E/eXmVnLThWQ4VH+f1qqA5Y1dGGFdeAYis7HdB10ZuBYqA3o7iS03O0++QFxZ9MK/w4laaEcxm6dkyy5+/jpdoVg1QrPEg1KgSpZniQalQIVI0KgapePlDlAn1LdTGaEmA1VAAAAACeLT07TzuOXdCO4+e18/h57Tx+QacuZBd7f28vi2pXDFLdSiGqFxGsqEqFH7UrBqtyqL+8ytBRQUcgLAIAAABwO4Zh6GhKpjYeSdHWo6namnhe+5PSi3XE0Msi1akYrEZVQ9WoaqgaVglV/cohqlMxWH4+XqVffBlBWAQAAADg8gzD0JGzGfrt8Dn9duicNiWk6ExaznX3C/D1UtPIMEVXK6foamFqWi1MDauEKsDX2wlVl22ERQAAAAAu6ezFHP1y4Kx+PpCsXw+e0+m0a59S6mWRGlcNU8ta5dW8Rnk1q1lO9SNC5OPtOUcLbwRhEQAAAIBLKLAa2nH8vH7ck6S1+5O0+0TaNR8f6u+j1nUqqE2dcLWqVUHNapQr1fsSuhu+UwAAAADKrKzcAv18IFk/xJ3R2n1JOpeRe9XHhvr76Na6FdW+XkW1qxuuxlXDytStKFwNYREAAABAmXIxJ19r9pzR97tPa+2+ZGXlFVzxcd5eFrWqVV53NIhQx4YRiqkWximlDkRYBAAAAGC67LwCrd2XpGU7TmrNniTl5Fuv+LhKIf7q3DhCnRtXVof6lRQa4OvkSj0HYREAAACAKQzD0KaEVP1r63Gt2HlK6Tn5V3xc/coh6hZdRfc0rapbqpdzu/sZllWERQAAAABOdfpCthZtPqbFW47raErmFR/TuGqoetwSqXtvqar6lUOdXCEkwiIAAAAAJyiwGlq7L0mfbzyqH/cmyWrYP6ZOxSD1al5NPZtXU4MqBESzERYBAAAAlJrzmblauOmY/vlbok6cz7LbHhbgo/ubV9ODrWqoVa3yslg4xbSsICwCAAAAcLiDSen66JcjWrLthLLz7Berua1eRT3atpbuaVpFAb7eJlSI6yEsAgAAAHCISwvWzPn5kFbvSbLbXiHIV31ja+rRtrUUVSnYhApxIwiLAAAAAErEMAz9tC9J7/94UFuPnrfb3jQyTAM61FGv5tU4iuhCCIsAAAAAborVamjVnjP6vx8PaPeJNLvtXZtU0eA76qpNnQpci+iCCIsAAAAAbohhGFq7L1l/X7lP8aeKhkQ/by/1aVVdgzrWVf3KISZVCEcgLAIAAAAoto1HUvT3lXu1KSG1yLy/j5f+eGttDelUV1XCAkyqDo5EWAQAAABwXQeTLmrKt3u0Zm/RhWsCfL30p/Z1NKhjlCqHEhLdCWERAAAAwFWdu5ij6WsOaP5/jqrAat
jmfb0terxtLQ3rXJ+Q6KYIiwAAAADs5BdYNXdDot5ZtV/p2fm2eYtF+kOL6hp1d0PVDA8ysUKUNsIiAAAAgCI2JaR
"text/plain": [
"<Figure size 1152x504 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"plot_activation_functions()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Perceptron vs. linear regression"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"reglin.png\" width=\"70%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Training linear regression:\n",
"* Model: $$h_{\\theta}(x) = \\sum_{i=0}^n \\theta_ix_i$$\n",
"* Cost function (mean squared error): $$J(\\theta) = \\frac{1}{m} \\sum_{i=1}^{m} (h_{\\theta}(x^{(i)}) - y^{(i)})^2$$\n",
"* After computing $\\nabla J(\\theta)$ – plain SGD."
]
},
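The three bullets above fit in a few lines of NumPy. A minimal sketch with synthetic data; the learning rate, epoch count, and the true coefficients (3 and 2) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3 + 2x + small noise, with x_0 = 1 prepended
X = np.hstack([np.ones((100, 1)), rng.uniform(-1, 1, (100, 1))])
y = 3 + 2 * X[:, 1] + rng.normal(0, 0.01, 100)

theta = np.zeros(2)
alpha = 0.1
for _ in range(200):                  # epochs of plain SGD
    for i in rng.permutation(len(y)):
        err = X[i] @ theta - y[i]     # h_theta(x) - y
        theta -= alpha * err * X[i]   # gradient of the squared error for one example

print(theta)  # close to [3, 2]
```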
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Perceptron vs. two-class logistic regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"reglog.png\" width=\"60%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Training two-class logistic regression:\n",
"* Model: $h_{\\theta}(x) = \\sigma(\\sum_{i=0}^n \\theta_ix_i) = P(1|x,\\theta)$\n",
"* Cost function (cross-entropy): $$\\begin{eqnarray} J(\\theta) &=& -\\frac{1}{m} \\sum_{i=1}^{m} \\big( y^{(i)}\\log P(1|x^{(i)},\\theta) \\\\ && + (1-y^{(i)})\\log(1-P(1|x^{(i)},\\theta)) \\big) \\end{eqnarray}$$\n",
"* After computing $\\nabla J(\\theta)$ – plain SGD."
]
},
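The same SGD loop works for the logistic model; only the hypothesis and the per-example gradient change. A minimal sketch on made-up separable data (learning rate, epoch count, and data are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.hstack([np.ones((200, 1)), rng.normal(0, 1, (200, 1))])
y = (X[:, 1] > 0).astype(float)           # separable toy labels

theta = np.zeros(2)
for _ in range(100):
    for i in rng.permutation(len(y)):
        p = sigmoid(X[i] @ theta)         # P(1 | x, theta)
        theta -= 0.1 * (p - y[i]) * X[i]  # gradient of the cross-entropy

acc = np.mean((sigmoid(X @ theta) > 0.5) == y)
print(acc)
```

Note that the per-example update `(p - y) * x` has the same shape as the perceptron rule; only the output function differs.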
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Perceptron vs. multi-class logistic regression"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"multireglog.png\" width=\"40%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Multi-class logistic regression\n",
"* Model (for $c$ binary classifiers): \n",
"$$\\begin{eqnarray}\n",
"h_{(\\theta^{(1)},\\dots,\\theta^{(c)})}(x) &=& \\mathrm{softmax}(\\sum_{i=0}^n \\theta_{i}^{(1)}x_i, \\ldots, \\sum_{i=0}^n \\theta_i^{(c)}x_i) \\\\ \n",
"&=& \\left[ P(k|x,\\theta^{(1)},\\dots,\\theta^{(c)}) \\right]_{k=1,\\dots,c} \n",
"\\end{eqnarray}$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Cost function (**assuming the binary regression model**): $$\\begin{eqnarray} J(\\theta^{(k)}) &=& -\\frac{1}{m} \\sum_{i=1}^{m} \\big( y^{(i)}\\log P(k|x^{(i)},\\theta^{(k)}) \\\\ && + (1-y^{(i)})\\log P(\\neg k|x^{(i)},\\theta^{(k)}) \\big) \\end{eqnarray}$$\n",
"* After computing $\\nabla J(\\theta)$, run SGD **$c$ times** and apply $\\mathrm{softmax}(X)$ to the independently obtained binary classifiers."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Let us define: \n",
"$$ \\Theta = (\\theta^{(1)},\\dots,\\theta^{(c)}) $$\n",
"\n",
"$$h_{\\Theta}(x) = \\left[ P(k|x,\\Theta) \\right]_{k=1,\\dots,c}$$\n",
"\n",
"$$\\delta(x,y) = \\left\\{\\begin{array}{cl} 1 & \\textrm{if } x=y \\\\ 0 & \\textrm{otherwise}\\end{array}\\right.$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Multi-class cost function $J(\\Theta)$ (categorical cross-entropy):\n",
"$$ J(\\Theta) = -\\frac{1}{m}\\sum_{i=1}^{m}\\sum_{k=1}^{c} \\delta({y^{(i)},k}) \\log P(k|x^{(i)},\\Theta) $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Gradient $\\nabla J(\\Theta)$:\n",
"$$ \\dfrac{\\partial J(\\Theta)}{\\partial \\Theta_{j,k}} = -\\frac{1}{m}\\sum_{i = 1}^{m} (\\delta({y^{(i)},k}) - P(k|x^{(i)}, \\Theta)) x^{(i)}_j \n",
"$$\n",
"\n",
"* We compute all the weights in a single run of SGD"
]
},
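The gradient above translates directly into a single SGD loop over the whole weight matrix $\Theta$. A minimal sketch on made-up, well-separated Gaussian blobs; the learning rate, epoch count, and data layout are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
c = 3
# Three well-separated Gaussian blobs, labels 0..2
means = np.array([[0, 0], [4, 0], [0, 4]])
X = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in means])
y = np.repeat(np.arange(c), 30)
X = np.hstack([np.ones((len(X), 1)), X])           # prepend x_0 = 1

Theta = np.zeros((X.shape[1], c))                  # one column per class
for _ in range(50):
    for i in rng.permutation(len(y)):
        p = softmax(X[i] @ Theta)                  # [P(k | x, Theta)]_k
        delta = np.eye(c)[y[i]]                    # one-hot delta(y, k)
        Theta += 0.1 * np.outer(X[i], delta - p)   # step along -grad J

acc = np.mean(np.argmax(X @ Theta, axis=1) == y)
print(acc)
```

The update `outer(x, delta - p)` is exactly the per-example term $(\delta(y^{(i)},k) - P(k|x^{(i)},\Theta))\,x^{(i)}_j$ from the gradient formula, applied to all classes at once.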
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"## Summary\n",
"\n",
"* For a single-layer neural network, it is enough to know the gradient of the cost function.\n",
"* Then we compute it just as for linear regression, logistic regression, multi-class logistic regression, etc. (these models are special cases of single-layer neural networks).\n",
"* Linear regression and binary logistic regression are a single neuron.\n",
"* Multi-class logistic regression uses as many neurons as there are classes."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"The activation function and the cost function are **chosen to match the problem**."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.2. Activation functions"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Every activation function has its advantages and disadvantages.\n",
"* Different kinds of activation functions suit different applications."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/pawel/.local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
" from ._conv import register_converters as _register_converters\n",
"Using TensorFlow backend.\n"
]
}
],
"source": [
"%matplotlib inline\n",
"\n",
"import math\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import random\n",
"\n",
"import keras\n",
"from keras.datasets import mnist\n",
"from keras.models import Sequential\n",
"from keras.layers import Dense, Dropout, SimpleRNN, LSTM\n",
"from keras.optimizers import Adagrad, Adam, RMSprop, SGD\n",
"\n",
"from IPython.display import YouTubeVideo"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def plot(fun):\n",
" x = np.arange(-3.0, 3.0, 0.01)\n",
" y = [fun(x_i) for x_i in x]\n",
" fig = plt.figure(figsize=(14, 7))\n",
" ax = fig.add_subplot(111)\n",
" fig.subplots_adjust(left=0.1, right=0.9, bottom=0.1, top=0.9)\n",
" ax.set_xlim(-3.0, 3.0)\n",
" ax.set_ylim(-1.5, 1.5)\n",
" ax.grid()\n",
" ax.plot(x, y)\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Logistic function"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$$ g(x) = \\frac{1}{1 + e^{-x}} $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Takes values in the interval $(0, 1)$."
]
},
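A minimal implementation of the formula above (the sample points are arbitrary illustrative values):

```python
import math

def g(x):
    """Logistic function: values in (0, 1), with g(0) = 0.5."""
    return 1 / (1 + math.exp(-x))

print(g(0))            # 0.5
print(round(g(5), 3))  # 0.993
```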
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Logistic function – plot"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA1cAAAG2CAYAAACTRXz+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzt3XmsXudh3/nfw7vvGy/3VRu1Wou1\n2HAWOk4cx0jjpq1bezpN0mXUdhq0M0Axk04GCaaDAl2AWVNMa6RB00GRNJhOWs/UbdI0ZdJ2YFmW\nLVnWZslcxEUSl7vxbrzbmT/el1ekxCvJvke8l+TnA7x43/O+5/I8NB6R/Pqc87ylqqoAAACwPls2\negAAAAA3A3EFAABQA3EFAABQA3EFAABQA3EFAABQA3EFAABQg1riqpTya6WUs6WUb6/x+eFSymQp\n5dnm45fqOC4AAMBm0VrTr/OPk/xKkn/yHvv8h6qqfrKm4wEAAGwqtZy5qqrqD5OM1fFrAQAA3Iiu\n5z1XHy+lPFdK+dellPuu43EBAAA+dHVdFvh+vpFkf1VV06WUzyb5F0nuvNaOpZQnkzyZJJ2dnR/d\nt2/fdRoiN4qVlZVs2WItFq5mXrAWc4NrMS9Yi7nBtXznO985X1XV6PvtV6qqquWApZQDSf7fqqru\n/wD7Hk/yaFVV599rv0OHDlWvvPJKLePj5nHkyJEcPnx4o4fBJmNesBZzg2sxL1iLucG1lFKeqarq\n0ffb77pkeSllRymlNF8/3jzuhetxbAAAgOuhlssCSym/keRwkq2llFNJfjlJW5JUVfUPkvyJJH+5\nlLKUZC7JF6q6TpkBAABsArXEVVVVX3yfz38ljaXaAQAAbkru1gMAAKiBuAIAAKiBuAIAAKiBuAIA\nAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiB\nuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIA\nAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiB\nuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIA\nAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiB\nuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKiBuAIAAKhBLXFVSvm1UsrZUsq31/i8\nlFL+t1LKa6WUb5VSHqnjuAAAAJtFXWeu/nGSz7zH5z+R5M7m48kk/0dNxwUAANgUaomrqqr+MMnY\ne+zyuST/pGr4apLBUsrOOo4NAACwGVyve652Jzl5xfap5nsAAAA3hdaNHsA7lVKeTOPSwYyOjubI\nkSMbOyA2nenpafOCdzEvWIu5wbWYF6zF3GA9rldcnU6y94rtPc333qWqqi8l+VKSHDp0qDp8+PCH\nPjhuLEeOHIl5wTuZF6zF3OBazAvWYm6wHtfrssAvJ/mZ5qqBH0syWVXVG9fp2AAAAB+6Ws5clVJ+\nI8nhJFtLKaeS/HKStiSpquofJPlKks8meS3JbJI/W8dxAQAANota4qqqqi++z+dVkr9Sx7EAAAA2\no023oAUAAMCHpaqqXFpaycylpcwuLGdmYSkzl5Yyc2k5swuN55mFt7d//L4dH/jXFlcAAMCmd2lp\nOdPzS5m+tJSLzefp+aXMLFy9/fbni1e9N7uwvPq8vFJ9oGOWkuwd7v7AYxRXAADAh+bymaKpucVM\nzi1man4xU3NLmZpvbs8t5uIVETQ9v3T1dvP1
wvLK+x6rZUtJb0drejta09fZeB7sbs+e4e70tLek\nu73xXndHS3raW9Pd3tLcbl39vKejJT0drelpb01n25aUUvKnPuDvVVwBAADvaXG5EUdT80vvGUlT\n80tXvG5+Prf4vmHU0bplNYZ6m8+7Brve9d7qo7M1fVe+39mavo621RjaKOIKAABuESsrVS7OL2Vi\nbiHjs4sZn13IZPN5fHYxE7MLmWhuT1zx+cVLS+/567ZuKRnoakv/5Udna3YPdTXe62xLf1dr+jvb\n3t6ns3X1dV9nazpaW67T/wIfLnEFAAA3oKqqMjW/lAvTlzI2s5ALMwu5ML2QsZlLzTBqxNL47EIm\n5hYz0dxe63ajUpL+zrYMdbdloLs9I73tuWNbbwa72zLY1Z7B7mtFUuP1Rp8x2izEFQAAbAKXY2ls\nZiEXpi/lwszCNV5fjqhLGZ9dyOLytUupq60lQ91tGexuz1BPW3YOdGWwuy1D3Y1IGuxuf/vz5vNA\nV1tatgik9RBXAADwIamqKhOzizk3fSnnLjYeZy/Or76+MLOQ882zTWMza8dSb0drRnrbM9zTnt2D\nnfnI7oEM97ZnpKe9+X7H6uuh7vZ0tt0cl9ndaMQVAAB8j+YXl5uh1Iym6Us5NzV/VURdfv9awdTZ\ntiWjfR3Z2tuR3YOdeWB3f0Z6G4E03NO++los3VjEFQAANC2tVDk9MZc3J+fz1tT828/N142IunTN\nBR5KSUZ6OjLa15FtfR25c3tfRvs6Mtr79nujzUdvR6t7lG5C4goAgJteVVW5eGkpb02+HUpvR9Ol\nvDU1nzcm53Nh+lKq3/39q362vWVLtg90ZEd/Z+7Z0Z8fuvPtSLocT9v6OzLc3Z7Wli0b9DtkMxBX\nAADc8OYXl3NmYi5nJuZzZmIupyfmGtuTc3ljohFRswvL7/q5we627OjvzPb+zty7sz/z42/mYw/e\nvfrejoHODHW3OcvEByKuAADY1FZWqlyYWXg7mK6Mp2ZMXZhZuOpnSkm29XVk50BX7t7Zlx8+NJod\nzVi6/Ly9v/Nd9zIdOTKWw4/vu56/PW4i4goAgA21vFLlran5nBybzcnxuZwcm70qpM5MzmdhaeWq\nn+lub8nuwa7sGuzK/bsHsnuwM7ua27sHu7K9vzPtrS7R4/oSVwAAfKguL0d+cnw2J8fm8vrYbPP1\nbE6Nz+X0+FwWlt+Op1KS7X2d2T3UlQf2DObH7+9shNTA2/HU32VBCDYfcQUAwLrNLy7n5NhsI5yu\nOAP1ejOgpt+xut5Qd1v2Dnfn3p39+fR927NvuDt7h7qzd7g7uwY709Fq6XFuPOIKAIAPZG5hOSfG\nZnL8/GyOX5jJiQszOXZ+JicuzOaNyfmr9u1s27IaS08cHM7e4cbrxntd6ets26DfBXx4xBUAAKtm\nF5Zy4sJsjp+fyfELs1cF1JtTVwfUcE97Dox05+O3jeTA1p7sH+nOnmY8jfZ2uGyPW464AgC4xSwt\nr+T1sdkcPTeT756bztFzMzl2YSbHz8/k7MVLV+27tbc9+0d68ok7tubASHf2b+3JwZGe7BvpzkCX\ns09wJXEFAHCTmpxdzGvnpnP03HS+e26m+Tyd18dms7hcre63tbc9B7f25IfuGs2Bke4c2NqTAyON\nM1Eu34MPTlwBANzAlpZXcmp8bvUM1JXPV373U1tLyf6RntyxrTefvm9Hbh/tzW2jPbl9a28GugUU\n1EFcAQDcAJaWV3L8wmxefetivvPWdF49ezGvvjWdY+dnrlrGfKSnPbeN9uTH7t3eiKfR3tw22pu9\nQ11pbfG9T/BhElcAAJvI5Yh67Wwjor7zViOijp6fvupSvr3DXblrW18O3z2a20d7m4+eDHa3b+Do\n4dYmrgAANsDS8kpOjL19Juo7b13Ma2cbl/RdeSbqckR98u5tuXNbb+7a3pfbt/Wku90/42Cz8V8l\nAMCH7Pz0
pbz8xsW8/OZUXmo+v3p2OgtL746oHz40mru29YkouAH5rxUAoCbzi8t57ex0Xn7zYl5+\nY6rx/OZUzk+/vbDEtr6
"text/plain": [
"<matplotlib.figure.Figure at 0x7fdda9490fd0>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plot(lambda x: 1 / (1 + math.exp(-x)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Tangens hiperboliczny"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$$ g(x) = \\tanh x = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Przyjmuje wartości z przedziału $(-1, 1)$.\n",
"* Powstaje z funkcji logistycznej przez przeskalowanie i przesunięcie."
]
},
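  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# Szkic sprawdzenia (przykładowy kod, spoza oryginalnych slajdów):\n",
    "# tanh powstaje z funkcji logistycznej przez przeskalowanie i przesunięcie,\n",
    "# tzn. tanh(x) = 2 * sigma(2x) - 1.\n",
    "import math\n",
    "\n",
    "def sigma(x):\n",
    "    return 1 / (1 + math.exp(-x))\n",
    "\n",
    "for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:\n",
    "    assert abs(math.tanh(x) - (2 * sigma(2 * x) - 1)) < 1e-12"
   ]
  },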
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Tangens hiperboliczny – wykres"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA1cAAAG2CAYAAACTRXz+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzs3Xd0HOW9h/Hvu7vq1bKKZVtyt9xx\nw6ZjUw0pQCgBQjc4cOEGktz0nnADaeSShCQQejVOaCZ0AqLjinuVu+Si3tuW9/6htRFGsmVrpFlJ\nz+ecPTuzO9r5kTNH8pPZnTXWWgEAAAAAOsfj9gAAAAAA0BsQVwAAAADgAOIKAAAAABxAXAEAAACA\nA4grAAAAAHAAcQUAAAAADnAkrowxDxpjio0xa9p5fpYxpsoYsyJ8+6kT+wUAAACASOFz6HUelvQX\nSY8eYpv3rLVfdGh/AAAAABBRHDlzZa19V1K5E68FAAAAAD1Rd37m6nhjzEpjzCvGmPHduF8AAAAA\n6HJOvS3wcJZLGmKtrTXGnCvpeUmj2trQGDNP0jxJio2NnZabm9tNI6KnCIVC8ni4Fgs+i+MC7eHY\nQFs4LtAejg20ZdOmTaXW2ozDbWestY7s0BgzVNK/rbUTOrDtdknTrbWlh9ouLy/Pbty40ZH50Hvk\n5+dr1qxZbo+BCMNxgfZwbKAtHBdoD8cG2mKMWWatnX647boly40xA4wxJrw8I7zfsu7YNwAAAAB0\nB0feFmiMeUrSLEnpxphCST+TFCVJ1tq/S7pI0k3GmICkBkmXWqdOmQEAAABABHAkrqy1lx3m+b+o\n5VLtAAAAANAr8Wk9AAAAAHAAcQUAAAAADiCuAAAAAMABxBUAAAAAOIC4AgAAAAAHEFcAAAAA4ADi\nCgAAAAAcQFwBAAAAgAOIKwAAAABwAHEFAAAAAA4grgAAAADAAcQVAAAAADiAuAIAAAAABxBXAAAA\nAOAA4goAAAAAHEBcAQAAAIADiCsAAAAAcABxBQAAAAAOIK4AAAAAwAHEFQAAAAA4gLgCAAAAAAcQ\nVwAAAADgAOIKAAAAABxAXAEAAACAA4grAAAAAHAAcQUAAAAADiCuAAAAAMABxBUAAAAAOIC4AgAA\nAAAHEFcAAAAA4ADiCgAAAAAcQFwBAAAAgAOIKwAAAABwAHEFAAAAAA4grgAAAADAAcQVAAAAADiA\nuAIAAAAABxBXAAAAAOAA4goAAAAAHEBcAQAAAIADiCsAAAAAcABxBQAAAAAOIK4AAAAAwAHEFQAA\nAAA4gLgCAAAAAAcQVwAAAADgAOIKAAAAABxAXAEAAACAA4grAAAAAHAAcQUAAAAADiCuAAAAAMAB\nxBUAAAAAOIC4AgAAAAAHEFcAAAAA4ADiCgAAAAAcQFwBAAAAgAOIKwAAAABwAHEFAAAAAA5wJK6M\nMQ8aY4qNMWvaed4YY/5kjCkwxqwyxkx1Yr8AAAAAECmcOnP1sKQ5h3j+HEmjwrd5kv7m0H4BAAAA\nICI4ElfW2ncllR9ik/MkPWpbfCwp1RiT7cS+AQAAACAS+LppP4Mk7Wq1Xhh+bE837R8AAABAL2St\nVchKwZBVyFoFQ1ZBaxUKhZfD68GQVSikT5ftp8+HPvPYZ19rdFZSh2fprrjqMGPMPLW8dVAZGRnK\nz893dyBEnNraWo4LfA7HBdrDsYG2cFygPX3x2LDWKmilYEjyh6SAtQqE1OrWsu4Ph4m/jecCISlg\nJX8oHCe25bHQ/tcOv37IhuMmvL7/uQNRc2Bd4SjSZ7YNHfSzofDzXWnuhOgOb9tdcVUkKafV+uDw\nY59jrb1P0n2SlJeXZ2fNmtXlw6Fnyc/PF8cFDsZxgfZwbKAtHBdoTyQcG6GQVYM/qPrmoOqbA6pr\nCqoxEFSjP6gmf0iN/v3r4WV/SA3+oJr8wQPr
+7c/sE0g9Jnn/cGQmgMhNQVblq3DgeLzGPm8Rj6P\nJ3zfsuz1GEV5Tfi+Zd3n9SjGYz77Mwf9vNdjFOXxyOs1ivIYeT2eA6/jC697PZLHY+Q1LY97wvcH\nbsa0PO/Rp8/tf2z/zxx4TAceG5qeoJ929L/b2f8Z27VQ0i3GmPmSZkqqstbylkAAAAD0eNZa1TcH\nVdMYUE2jX9WNAVU3+lXTGFB9U0B1zcFP78OxVN/c/uP1zcGjmiPa51Gsz6PYKK9io7yKi/IqNsqj\nmCivUuKiFJsUo9gor2J8HsVEeRTl9Sja51G0t+UWtX+51X3UgXujaJ9HMZ95rO3to7xGxhiH/1fu\nGRyJK2PMU5JmSUo3xhRK+pmkKEmy1v5d0suSzpVUIKle0rVO7BcAAABwgj8YUkV9s4pqQlq0tUwV\n9X5VNTSruuHzwVTT6G95vGn/ekDB0OFP/UR7PYqP8Soh2qf4aK/iY3xKiPYqNT5aCTFexUe3rO9/\nfP99XDiWYsOx9Jlln/dAMHk8fTNoIokjcWWtvewwz1tJNzuxLwAAAOBQgiGr8rpmldQ0qbS2SRX1\nzaqoa1ZFvV+V9S33FfXNqmx1X9sU+PQFPvj4M69njJQY41NybJSSYlvus1NiNTo2UclxLY8lxX56\nn9zqPiHGp4Ron+KivYr2OfUtSIhUEXdBCwAAAOBgoZBVWatg2n9rWW9utdyk8rpmtXciKTnWp34J\n0UqNj1b/xGiNzExUanyU+sVHq198lHZvL9CJ0ye3PJYQ3RJI0T7OCqFDiCsAAAC4yh8MqbimSXur\nGrW3qlF7qhpa7qsbta+qUXuqGrWvulGBNoopxudRRlKM0hNjNLhfvKbkpiojMUbp4cfSE2OUltAS\nTilxUfJ5D332KL9pu04ald5V/6no5YgrAAAAdKlGf1CFFfXaVdGgwvJ6FVY0aFdFvYoqGrSnqlEl\ntU2fu1pdbJRH2SlxGpAcq5nD0jQgJVZZybHKSIo5EFPpidFKjPH12YsnIPIQVwAAAOgUa61Kapq0\ntbRO20vrtLNVQBVWNKikpukz20f7PBrcL06DUuM0ZkCyBqTEHrhlp8QqOzlOyXFEE3oe4goAAAAd\nUtXg17bSOm0rrdW2kjptLa3TtnBQ1bW6fLjXYzQwNVY5/eI1Oy9DOf3ilZMWr5y0OA3uF6+MxBg+\nw4ReibgCAADAZ5TWNmnT3hpt2lejTcW12ryvRltL6lRW13xgG4+RBveL17D0BB07NE3DMxI0tH+C\nhqUnKDsl9rCfbQJ6I+IKAACgj6pu9GvDnnBEHbjVqrxVRKXERWl0VqLOGp+lYekJGpaeqGHpLWei\nYnxeF6cHIg9xBQAA0AeU1DRp7e4qrd1dfeB+R1n9gecTY3walZWos8ZlaVRWkvKykjQ6K1EZSTF8\n9gnoIOIKAACglymuadSKnZVaXfRpTO2r/vSiErlp8Ro/MFkXTxus8QNTNHpAkgamxBJRQCcRVwAA\nAD1YUyCodbur9cnOSn2yq1Kf7KxQYUWDpJbPRY3MTNSJI9I1bmCyxg9M0biByUqJi3J5aqB3Iq4A\nAAB6kH3VjVq8rVzLd1bok52VWre7Ws3BkCRpYEqspuT20zUnDNWU3FSNH5ii2Cg+FwV0F+IKAAAg\nQllrVVjRoEXbyrV4W5kWbSs/8Dmp2CiPJg1O1bUnDdWUnH6akpuqrORYlycG+jbiCgAAIILsKq/X\nBwWlWrStXIu2lml3VaMkKTU+SscOTdOVxw3RjGFpGpudrCgudw5EFOIKAADARdWNfn1YUKb3C0r0\n/uZSbQ+fmUpPjNbMYf114/A0zRiWptGZSXzxLhDhiCsAAIBuFAiGtLKwUu9uKtV7m0u0srBKwZBV\nfLRXxw/vr6tPGKqTR6VrREYiV+8DehjiCgAAoItVN/r1zsYSvbWhWG9vLFZlvV/GSJMGp+qmU0fo\n5FHpmpLb
T9E+3uYH9GTEFQAAQBfYUVanN9cX6z/r92nxtnIFQlb94qN0Wl6mThubqZNGpis1Ptrt\nMQE4iLgCAABwgLVW6/Z
"text/plain": [
"<matplotlib.figure.Figure at 0x7fdda93e9590>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plot(lambda x: math.tanh(x))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### ReLU (*Rectified Linear Unit*)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$$ g(x) = \\max(0, x) $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### ReLU – zalety\n",
"* Mniej podatna na problem zanikającego gradientu (*vanishing gradient*) niż funkcje sigmoidalne, dzięki czemu SGD jest szybciej zbieżna.\n",
"* Prostsze obliczanie gradientu.\n",
"* Dzięki zerowaniu ujemnych wartości, wygasza neurony, „rozrzedzając” sieć (*sparsity*), co przyspiesza obliczenia."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"#### ReLU – wady\n",
"* Dla dużych wartości gradient może „eksplodować”.\n",
"* „Wygaszanie” neuronów."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### ReLU – wykres"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA1cAAAG2CAYAAACTRXz+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAGJhJREFUeJzt3X+sZOd91/HPN/6RRQk0AS+NYzvE\npdalpkBLLNcRVbkmTnFMFTelkWwhkQDVtgirBQmBi0UCgUitkABBItpVY9VJo6QWqZul2eIkda7d\nCNzaiZzWXmfdxQR2t4lN7CT0KmnM1g9/7Gx7vZ659+7Oc2fOzLxe0tXOj7PzPH98vfbb58zZaq0F\nAACA6bxk3hsAAABYBuIKAACgA3EFAADQgbgCAADoQFwBAAB0IK4AAAA66BJXVXVnVT1dVY9OeH+9\nqr5WVY+Mft7RY10AAIChuLDT5/x8kvckef82x/x6a+0HOq0HAAAwKF3OXLXWHkjybI/PAgAAWESz\n/M7V66vqc1X1q1X152e4LgAAwJ7rdVngTj6b5M+01jar6qYkv5zkqnEHVtWBJAeSZN++fa97zWte\nM6Mtsiief/75vOQl7sXCC5kLJjEbjGMukme+0fJ7/6/lWy6uvHJfzXs7g2E2GOeJJ574cmtt/07H\nVWuty4JV9dokv9Ja+85dHPuFJNe01r683XFra2vt6NGjXfbH8tjY2Mj6+vq8t8HAmAsmMRuMs8pz\n0VrLP//oo/mFB/93fvSvfltuv/HPpUpcnbHKs8FkVfWZ1to1Ox03kyyvqlfV6J/aqrp2tO4zs1gb\nAIDThBXsrS6XBVbVh5KsJ7mkqk4keWeSi5KktfYzSX44yd+vqlNJvpHkltbrlBkAADsSVrD3usRV\na+3WHd5/T07fqh0AgBkTVjAbvq0HALDEhBXMjrgCAFhSwgpmS1wBACwhYQWzJ64AAJaMsIL5EFcA\nAEtEWMH8iCsAgCUhrGC+xBUAwBIQVjB/4goAYMEJKxgGcQUAsMCEFQyHuAIAWFDCCoZFXAEALCBh\nBcMjrgAAFoywgmESVwAAC0RYwXCJKwCABSGsYNjEFQDAAhBWMHziCgBg4IQVLAZxBQAwYMIKFoe4\nAgAYKGEFi0VcAQAMkLCCxSOuAAAGRljBYhJXAAADIqxgcYkrAICBEFaw2MQVAMAACCtYfOIKAGDO\nhBUsB3EFADBHwgqWh7gCAJgTYQXLRVwBAMyBsILlI64AAGZMWMFyElcAADMkrGB5iSsAgBkRVrDc\nxBUAwAwIK1h+4goAYI8JK1gN4goAYA8JK1gd4goAYI8IK1gt4goAYA8IK1g94goAoDNhBatJXAEA\ndCSsYHWJKwCAToQVrDZxBQDQgbACxBUAwJSEFZCIKwCAqQgr4AxxBQBwnoQVsJW4AgA4D8IKOJu4\nAgA4R8IKGEdcAQCcA2EFTCKuAAB2SVgB2xFXAAC7IKyAnYgrAIAdCCtgN8QVAMA2hBWwW+IKAGAC\nYQWcC3EFADCGsALOlbgCADiLsALOh7gCANhCWAHnS1wBAIwIK2Aa4goAIMIKmJ64AgBWnrACehBX\nAMBKE1ZAL+IKAFhZwgroqUtcVdWdVfV0VT064f2qqv9QVceq6req6i/3WBcA4HwJK6C3Xmeufj7J\njdu8/6YkV41+DiT5T53WBQA4Z8IK2AsX9viQ1toDVfXabQ65Ocn7W2styYNV9YqqurS19sUe6wMA\n7FZrLR848lzuOy6sgL5m9Z2ry5Ic3/L8xOg1AICZOXPG6r7jp4QV0F2XM1c9VdWBnL50MPv378/G\nxsZ8N8TgbG5umgtexFwwidngjD86Y3UqN1zWct2+L+X++5+a97YYGH9mMI1ZxdXJJFdseX756LUX\naa0dTHIwSdbW1tr6+vqeb47FsrGxEXPB2cwF
k5gNkq1nrE5fCnjdvi/l+uuvn/e2GCB/ZjCNWV0W\neCjJ3x7dNfC6JF/zfSsAYBbcvAKYlS5nrqrqQ0nWk1xSVSeSvDPJRUnSWvuZJIeT3JTkWJKvJ/k7\nPdYFANiOsAJmqdfdAm/d4f2W5B/0WAsAYDeEFTBrs7osEABgZoQVMA/iCgBYKsIKmBdxBQAsDWEF\nzJO4AgCWgrAC5k1cAQALT1gBQyCuAICFJqyAoRBXAMDCElbAkIgrAGAhCStgaMQVALBwhBUwROIK\nAFgowgoYKnEFACwMYQUMmbgCABaCsAKGTlwBAIMnrIBFIK4AgEETVsCiEFcAwGAJK2CRiCsAYJCE\nFbBoxBUAMDjCClhE4goAGBRhBSwqcQUADIawAhaZuAIABkFYAYtOXAEAcyesgGUgrgCAuRJWwLIQ\nVwDA3AgrYJmIKwBgLoQVsGzEFQAwc8IKWEbiCgCYKWEFLCtxBQDMjLAClpm4AgBmQlgBy05cAQB7\nTlgBq0BcAQB7SlgBq0JcAQB7RlgBq0RcAQB7QlgBq0ZcAQDdCStgFYkrAKArYQWsKnEFAHQjrIBV\nJq4AgC6EFbDqxBUAMDVhBSCuAIApCSuA08QVAHDehBXAHxFXAMB5EVYALySuAIBzJqwAXkxcAQDn\nRFgBjCeuAIBdE1YAk4krAGBXhBXA9sQVALAjYQWwM3EFAGxLWAHsjrgCACYSVgC7J64AgLGEFcC5\nEVcAwIsIK4BzJ64AgBcQVgDnR1wBAH9IWAGcP3EFACQRVgDTElcAgLAC6EBcAcCKE1YAfYgrAFhh\nwgqgH3EFACtKWAH0Ja4AYAUJK4D+xBUArBhhBbA3xBUArBBhBbB3usRVVd1YVUer6lhV3T7m/bdX\n1f+pqkdGPz/SY10AYPeEFcDeunDaD6iqC5K8N8kbk5xI8lBVHWqtHTnr0F9srd027XoAwLkTVgB7\nr8eZq2uTHGutPdlaey7Jh5Pc3OFzAYAOhBXAbEx95irJZUmOb3l+Isn3jDnub1bV9yV5Isk/aq0d\nH3NMqupAkgNJsn///mxsbHTYIstkc3PTXPAi5oJJVn02Wmv5wJHnct/xU7npyoty3b4v5f77n5r3\ntuZu1eeCycwG0+gRV7vxX5J8qLX2zar60SR3Jflr4w5srR1McjBJ1tbW2vr6+oy2yKLY2NiIueBs\n5oJJVnk2zpyxuu+4M1ZnW+W5YHtmg2n0uCzwZJIrtjy/fPTaH2qtPdNa++bo6c8leV2HdQGACVwK\nCDB7PeLqoSRXVdWVVXVxkluSHNp6QFVduuXpm5M83mFdAGAMYQUwH1NfFthaO1VVtyW5N8kFSe5s\nrT1WVe9K8nBr7VCSH6+qNyc5leTZJG+fdl0A4MWEFcD8dPnOVWvtcJLDZ732ji2PfzLJT/ZYCwAY\nT1gBzFeXv0QYAJgvYQUwf+IKABacsAIYBnEFAAtMWAEMh7gCgAUlrACGRVwBwAISVgDDI64AYMEI\nK4BhElcAsECEFcBwiSsAWBDCCmDYxBUALABhBTB84goABk5YASwGcQUAAyasABaHuAKAgRJWAItF\nXAHAAAkrgMUjrgBgYIQVwGISVwAwIMIKYHGJKwAYCGEFsNjEFQAMgLACWHziCgDmTFgBLAdxBQBz\nJKwAloe4AoA5EVYAy0VcAcAcCCuA5SOuAGDGhBXAchJXADBDwgpgeYkrAJgRYQWw3MQVAMyAsAJY\nfuIKAPaYsAJYDeIKAPaQsAJYHeIKAPaIsAJYLeIKAPaAsAJYPeIKADoTVgCrSVwBQEfCCmB1iSsA\n6ERYAaw2cQUAHQgrAMQVAExJWAGQiCsAmIqwAuAMcQUA50lYAbCVuAKA8yCsADibuAKAcySsABhH\nXAHAORBWAEwirgBgl4QVANsRVwCwC8IKgJ2IKwDYgbACYDfEFQBsQ1gBsFviCgAmEFYAnAtxBQBj\nCCsAzpW4
AoCzCCsAzoe4AoAthBUA50tcAcCIsAJgGuIKACKsAJieuAJg5QkrAHoQVwCsNGEFQC/i\nCoCVJawA6ElcAbCShBU
"text/plain": [
"<matplotlib.figure.Figure at 0x7fdda936c6d0>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plot(lambda x: max(0, x))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Softplus"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$$ g(x) = \\log(1 + e^{x}) $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Wygładzona wersja ReLU."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Softplus – wykres"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA1cAAAG2CAYAAACTRXz+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzt3Xd0XOWd//HPV733YlmWuyz3go0J\nJWCHXoIDgQCppKxDyi7pIWEDu+THhkAKKbtJ2IQEshAgEEIzmBaFboONuy13W5Zly+oa9dE8vz80\nNsZItrCudUfS+3WOjmdGF93vnvOs7HfunWfMOScAAAAAQP9E+T0AAAAAAAwFxBUAAAAAeIC4AgAA\nAAAPEFcAAAAA4AHiCgAAAAA8QFwBAAAAgAc8iSszu9vMqsxsXS/fX2BmDWa2Kvx1kxfnBQAAAIBI\nEePRz/mTpF9Luvcox7zsnLvEo/MBAAAAQETx5MqVc+4lSbVe/CwAAAAAGIwG8j1Xp5rZajN72sym\nDeB5AQAAAOCE8+q2wGNZKWmMcy5gZhdJ+ruk4p4ONLPFkhZLUkJCwtzRo0cP0IgYLEKhkKKi2IsF\n78a6QG9YG+iJX+uiMyQdaAmpIyRlxJsy4m3AZ8DR8TsDPdm8eXO1cy73WMeZc86TE5rZWElPOuem\n9+HYnZLmOeeqj3ZcSUmJKysr82Q+DB2lpaVasGCB32MgwrAu0BvWBnrix7p4em2lvv3wGsVEm35+\n1WwtLMkb0POjb/idgZ6Y2Qrn3LxjHTcgV67MbISk/c45Z2bz1X07Ys1AnBsAAMBPnV0h3fb0Jv3h\nlR2aVZSh//nESSrMSPR7LAAngCdxZWZ/kbRAUo6Z7ZF0s6RYSXLO/VbSFZK+ZGZBSa2SrnZeXTID\nAACIUPsa2vSV+1dqxa46XXvaWH3/oimKi+GWM2Co8iSunHPXHOP7v1b3Vu0AAADDwitbqnX9A2+r\nrbNLv7pmjj48a6TfIwE4wQZqQwsAAIBhIRRy+tWLW3XnC5tVnJei//nEXE3MS/F7LAADgLgCAADw\nSG1zh7724Cq9tPmALptTqFsvm66kOP65BQwX/H87AACAB97eXaev3LdS1YEO/ddlM3TN/CKZsdU6\nMJwQVwAAAP3gnNM9r+3UrUs2akR6gh750mmaMSrd77EA+IC4AgAAOE6B9qC++8gaPbWmUudMydNP\nr5yt9KRYv8cC4BPiCgAA4DiU7WvSl+5boZ3Vzbrhwsla/MHxioriNkBgOCOuAAAA3qe/vlWumx5b\nr5SEGN3/Lx/QB8Zn+z0SgAhAXAEAAPRRc3tQP3hsnf62skKnjs/WL66ZrbzUBL/HAhAhiCsAAIA+\n2LSvUV+5b6W2Vzfra+cU618/VKxobgMEcBjiCgAA4Cicc3rwzXLd/Ph6pSXG6r4vnKLTJuT4PRaA\nCERcAQAA9CLQHtSNj67VY6v26oyJOfr5VbOVmxrv91gAIhRxBQAA0IP1exv0r/e/rZ01zfrmuZP0\n5YUTuQ0QwFERVwAAAIdxzun/lu3WD5/coMykWHYDBNBnxBUAAEBYY1unvve3tXpqTaXOnJSrn39s\nlrJTuA0QQN8QVwAAAJLW7mnQV/+yUnvqWvWdC0p03ZkT+FBgAO8LcQUAAIY155zufX2Xbn1qo7JT\n4vTA4g/o5LFZfo8FYBAirgAAwLBV39Kh7z6yRkvX79eHJufpJ1fOUlZynN9jARikiCsAADAsLdte\no689uErVgXZ9/6LJ+sIZ47kNEEC/EFcAAGBYCXaF9OiWDj2x9A2NzkrSI186TTNHZfg9FoAhgLgC\nAADDRkV9q77+wCot39mpy+cU6paPTFdKPP8cAuANfpsAAIBh4Zl1lfruI2sV7ArpX2bE6carZvs9\nEoAhhrgCAABDWltnl3745Abdt2y3Zo5K1y+v
nqOd6970eywAQxBxBQAAhqyyfU3617+s1Ob9AS0+\nc7y+dV6J4mKitNPvwQAMScQVAAAYcpxz+r9lu/X/ntyg1IQY3fO5+TprUq7fYwEY4ogrAAAwpBz+\n2VVnTsrVT6+cpdzUeL/HAjAMEFcAAGDIWL6jVtc/8LaqA+268aIp+vwZ4/jsKgADhrgCAACDXmdX\nSHc+v1m/Kd3GZ1cB8A1xBQAABrXtBwL6+oOrtHpPg66cO0o3XzqNz64C4At+8wAAgEHJOacH3izX\nLU9sUFxMlH7ziZN04YwCv8cCMIwRVwAAYNCpCbTrhr+t1XMb9uv0idn66ZWzNSI9we+xAAxzxBUA\nABhUSsuq9O2H16ihpVP/fvEUfe50Nq0AEBmIKwAAMCi0dXbptqc36U+v7VRJfqru/dx8TSlI83ss\nADiEuAIAABFvw95Gfe3Bt7V5f0CfPX2svnvBZCXERvs9FgC8C3EFAAAiVijk9IdXduiOpWVKT4rV\nPZ+br7Mm5fo9FgD0iLgCAAARqbKhVd/662q9urVG503N120fnams5Di/xwKAXhFXAAAg4jy5Zq9u\nfHSdOoIh3Xb5DF11cpHM2LQCQGQjrgAAQMSob+nQDx5brydW79WsogzdedVsjctJ9nssAOgT4goA\nAESE0rIqfefhNapt7tA3z52kLy2YoJjoKL/HAoA+I64AAICvmtuDunXJRt2/bLcm5afo7mtP1vTC\ndL/HAoD3jbgCAAC+eXNnrb750GqV17Xoi2eO19fPncQW6wAGLeIKAAAMuLbOLv38uc266+XtGpWZ\nqAcXn6r547L8HgsA+oW4AgAAA2r93gZ948HVKtvfpGvmj9aNF09RSjz/JAEw+PGbDAAADIhgV0i/\n/ec23fn8FmUlx+mP156shZPz/B4LADxDXAEAgBNu+4GAvvHQaq0qr9clMwv0w0XTlckHAgMYYogr\nAABwwoRCTve+vlO3PbNJ8THR+uU1c3TprJF+jwUAJwRxBQAATohdNc369sNrtHxHrc6alKvbr5ip\n/LQEv8cCgBOGuAIAAJ4KhZzueX2nbn+mTDFRpts/OlNXzhslM/N7NAA4oYgrAADgmZ3VzfrOw2u0\nfGf31arbPjpDBemJfo8FAAOCuAIAAP0WCjn98bWdumPpJsVGR+mOK2bqirlcrQIwvBBXAACgX3ZU\nN+s7D6/WmzvrtLAkVz+6fKZGpPPeKgDDD3EFAACOS1fI6Y+v7tAdS8sUFxOln1w5Sx89qZCrVQCG\nLeIKAAC8b9sPBPTth9doxa46fWhynv7rshlcrQIw7BFXAACgz7pCTne/skM/ebZM8TFR+tnHZumy\nOVytAgCJuAIAAH20sbJRNzyyRqv3NOjsyXn6r8tn8LlVAHAY4goAABxVe7BLv35xq35Tuk1pibH6\nxdWzdemskVytAoAjEFcAAKBXb+2s1XcfWaNtB5p1+ZxC/fslU5WVHOf3WAAQkYgrAADwHoH2oG5/\nZpP+/MYujUxP1J8+e7IWlOT5PRYARDTiCgAAvMs/NlXpxkfXqrKxTZ85day+dX6JUuL5JwMAHIsn\nvynN7G5Jl0iqcs5N7+H7JukXki6S1CLpWufcSi/ODQAAvFETaNctT27QY6v2amJeih6+7jTNHZPp\n91gAMGh49T9D/UnSryXd28v3L5RUHP46RdJvwn8CAACfOef02Kq9uuXJDWpq69T1ZxfrywsnKD4m\n2u/RAGBQ8SSunHMvmdnYoxyySNK9zjkn6Q0zyzCzAudcpRfnBwAAx6e8tkU3PbZO/yg7oNlFGfrx\nR2eqZESq32MBwKA0UDdQF0oqP+z5nvBrxBUAAD7o7Arp7ld26M7nt8hM+sElU3XtaWMVHcX26gBw\nvCLu3almtljSYknKzc1VaWmpvwMh4gQCAdYF3oN1gd6wNt5ra32X7lnfofKmkObkReuTU+KUHdyl\nl1/a5fdo
A4Z1gd6wNtAfAxVXFZKKDns+Kvzaezjn7pJ0lySVlJS4BQsWnPDhMLiUlpaKdYEjsS7Q\nG9bGOxpaO3XH0k26b9l
"text/plain": [
"<matplotlib.figure.Figure at 0x7fdde8452e10>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plot(lambda x: math.log(1 + math.exp(x)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Problem zanikającego gradientu (*vanishing gradient problem*)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Sigmoidalne funkcje aktywacji ograniczają wartości na wyjściach neuronów do niewielkich przedziałów ($(-1, 1)$, $(0, 1)$ itp.).\n",
"* Jeżeli sieć ma wiele warstw, to podczas propagacji wstecznej mnożymy przez siebie wiele małych wartości → obliczony gradient jest mały.\n",
"* Im więcej warstw, tym silniejszy efekt zanikania."
]
},
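  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# Ilustracja (szkic, przykładowe wartości): maksymalna wartość pochodnej\n",
    "# funkcji logistycznej g'(x) = g(x) * (1 - g(x)) to 0.25 (w punkcie x = 0).\n",
    "# Przy L warstwach sigmoidalnych gradient zawiera iloczyn L takich czynników,\n",
    "# więc nawet w najlepszym przypadku maleje jak 0.25**L:\n",
    "for L in [1, 5, 10, 20]:\n",
    "    print(L, 0.25 ** L)"
   ]
  },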
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Sposoby na zanikający gradient\n",
"\n",
"* Modyfikacja algorytmu optymalizacji (*RProp*, *RMSProp*)\n",
"* Użycie innej funkcji aktywacji (ReLU, softplus)\n",
"* Dodanie warstw *dropout*\n",
"* Nowe architektury (LSTM itp.)\n",
"* Więcej danych, zwiększenie mocy obliczeniowej"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.3. Wielowarstwowe sieci neuronowe\n",
"\n",
"czyli _Artificial Neural Networks_ (ANN) lub _Multi-Layer Perceptrons_ (MLP)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"nn1.png\" width=\"70%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Architektura sieci\n",
"\n",
"* Sieć neuronowa jako graf neuronów. \n",
"* Organizacja sieci przez warstwy.\n",
"* Najczęściej stosowane są sieci jednokierunkowe i gęste."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* $n$-warstwowa sieć neuronowa ma $n+1$ warstw (nie liczymy wejścia).\n",
"* Rozmiary sieci określane poprzez liczbę neuronów lub parametrów."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Sieć neuronowa jednokierunkowa (*feedforward*)\n",
"\n",
"* Mając daną $L$-warstwową sieć neuronową, jej parametry $\\Theta^{(1)}, \\ldots, \\Theta^{(L)} $ oraz $\\beta^{(1)}, \\ldots, \\beta^{(L)} $, liczymy:<br/><br/> \n",
"$$a^{(l)} = g^{(l)}\\left( a^{(l-1)} \\Theta^{(l)} + \\beta^{(l)} \\right). $$"
]
},
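  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Szkic kroku w przód w NumPy (przykładowe rozmiary warstw i wagi,\n",
    "# niepochodzące z wykładu). Implementuje wzór a(l) = g(a(l-1) Theta(l) + beta(l))\n",
    "# dla g = tanh we wszystkich warstwach.\n",
    "import numpy as np\n",
    "\n",
    "a = np.array([[0.5, -1.0]])                   # a(0) = x, wektor wierszowy cech\n",
    "Thetas = [np.ones((2, 3)), np.ones((3, 1))]   # przykładowe macierze wag\n",
    "betas = [np.zeros((1, 3)), np.zeros((1, 1))]  # przykładowe wyrazy wolne\n",
    "\n",
    "for Theta, beta in zip(Thetas, betas):\n",
    "    a = np.tanh(a @ Theta + beta)\n",
    "\n",
    "print(a.shape)  # (1, 1)"
   ]
  },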
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"nn2.png\" width=70%/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Funkcje $g^{(l)}$ to tzw. **funkcje aktywacji**.<br/>\n",
"Dla $l = 0$ przyjmujemy $a^{(0)} = \\mathrm{x}$ (wektor wierszowy cech) oraz $g^{(0)}(x) = x$ (identyczność)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Parametry $\\Theta$ to wagi na połączeniach między neuronami dwóch warstw.<br/>\n",
"Rozmiar macierzy $\\Theta^{(l)}$, czyli macierzy wag na połączeniach warstw $a^{(l-1)}$ i $a^{(l)}$, to $\\dim(a^{(l-1)}) \\times \\dim(a^{(l)})$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Parametry $\\beta$ zastępują tutaj dodawanie kolumny z jedynkami do macierzy cech.<br/>Macierz $\\beta^{(l)}$ ma rozmiar równy liczbie neuronów w odpowiedniej warstwie, czyli $1 \\times \\dim(a^{(l)})$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* **Klasyfikacja**: dla ostatniej warstwy $L$ (o rozmiarze równym liczbie klas) przyjmuje się $g^{(L)}(x) = \\mathop{\\mathrm{softmax}}(x)$.\n",
"* **Regresja**: pojedynczy neuron wyjściowy jak na obrazku. Funkcją aktywacji może wtedy być np. funkcja identycznościowa."
]
},
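  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# Szkic funkcji softmax dla warstwy wyjściowej klasyfikatora (przykładowa\n",
    "# implementacja; wersja stabilna numerycznie: przed potęgowaniem odejmujemy\n",
    "# maksimum, co nie zmienia wyniku, a zapobiega nadmiarowi).\n",
    "import numpy as np\n",
    "\n",
    "def softmax(z):\n",
    "    e = np.exp(z - np.max(z))\n",
    "    return e / np.sum(e)\n",
    "\n",
    "p = softmax(np.array([1.0, 2.0, 3.0]))\n",
    "print(p, p.sum())  # wartości dodatnie, suma bliska 1"
   ]
  },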
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Pozostałe funkcje aktywacji najczęściej mają postać sigmoidy, np. sigmoidalna, tangens hiperboliczny.\n",
"* Mogą mieć też inny kształt, np. ReLU, leaky ReLU, maxout."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Jak uczyć sieci neuronowe?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* W poznanych do tej pory algorytmach (regresja liniowa, regresja logistyczna) do uczenia używaliśmy funkcji kosztu, jej gradientu oraz algorytmu gradientu prostego (GD/SGD)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Dla sieci neuronowych potrzebowalibyśmy również znaleźć gradient funkcji kosztu."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Sprowadza się to do bardziej ogólnego problemu:<br/>jak obliczyć gradient $\\nabla f(x)$ dla danej funkcji $f$ i wektora wejściowego $x$?"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.4. Metoda propagacji wstecznej – wprowadzenie"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Pochodna funkcji\n",
"\n",
"* **Pochodna** mierzy, jak szybko zmienia się wartość funkcji względem zmiany jej argumentów:\n",
"\n",
"$$ \\frac{d f(x)}{d x} = \\lim_{h \\to 0} \\frac{ f(x + h) - f(x) }{ h } $$"
]
},
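  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# Powyższą definicję można zastosować wprost do numerycznego przybliżenia\n",
    "# pochodnej (szkic; krok h dobrany przykładowo):\n",
    "def numeric_derivative(f, x, h=1e-6):\n",
    "    return (f(x + h) - f(x)) / h\n",
    "\n",
    "print(numeric_derivative(lambda x: x**2, 3.0))  # ok. 6\n",
    "print(numeric_derivative(lambda x: x**3, 2.0))  # ok. 12"
   ]
  },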
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Pochodna cząstkowa i gradient\n",
"\n",
"* **Pochodna cząstkowa** mierzy, jak szybko zmienia się wartość funkcji względem zmiany jej *pojedynczego argumentu*."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* **Gradient** to wektor pochodnych cząstkowych:\n",
"\n",
"$$ \\nabla f = \\left( \\frac{\\partial f}{\\partial x_1}, \\ldots, \\frac{\\partial f}{\\partial x_n} \\right) $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Gradient – przykłady"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$$ f(x_1, x_2) = x_1 + x_2 \\qquad \\to \\qquad \\frac{\\partial f}{\\partial x_1} = 1, \\quad \\frac{\\partial f}{\\partial x_2} = 1, \\quad \\nabla f = (1, 1) $$ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"$$ f(x_1, x_2) = x_1 \\cdot x_2 \\qquad \\to \\qquad \\frac{\\partial f}{\\partial x_1} = x_2, \\quad \\frac{\\partial f}{\\partial x_2} = x_1, \\quad \\nabla f = (x_2, x_1) $$ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"$$ f(x_1, x_2) = \\max(x_1, x_2) \\hskip{12em} \\\\\n",
"\\to \\qquad \\frac{\\partial f}{\\partial x_1} = \\mathbb{1}_{x_1 \\geq x_2}, \\quad \\frac{\\partial f}{\\partial x_2} = \\mathbb{1}_{x_2 \\geq x_1}, \\quad \\nabla f = (\\mathbb{1}_{x_1 \\geq x_2}, \\mathbb{1}_{x_2 \\geq x_1}) $$ "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Własności pochodnych cząstkowych\n",
"\n",
"Jeżeli $f(x, y, z) = (x + y) \, z$ oraz $x + y = q$, to:\n",
"$$f = q z,\n",
"\\quad \\frac{\\partial f}{\\partial q} = z,\n",
"\\quad \\frac{\\partial f}{\\partial z} = q,\n",
"\\quad \\frac{\\partial q}{\\partial x} = 1,\n",
"\\quad \\frac{\\partial q}{\\partial y} = 1 $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Reguła łańcuchowa\n",
"\n",
"$$ \\frac{\\partial f}{\\partial x} = \\frac{\\partial f}{\\partial q} \\, \\frac{\\partial q}{\\partial x},\n",
"\\quad \\frac{\\partial f}{\\partial y} = \\frac{\\partial f}{\\partial q} \\, \\frac{\\partial q}{\\partial y} $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Propagacja wsteczna – prosty przykład"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Dla ustalonego wejścia\n",
"x = -2; y = 5; z = -4"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(3, -12)\n"
]
}
],
"source": [
"# Krok w przód\n",
"q = x + y\n",
"f = q * z\n",
"print(q, f)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-4, -4, 3]\n"
]
}
],
"source": [
"# Propagacja wsteczna dla f = q * z\n",
"dz = q\n",
"dq = z\n",
"# Propagacja wsteczna dla q = x + y\n",
"dx = 1 * dq # z reguły łańcuchowej\n",
"dy = 1 * dq # z reguły łańcuchowej\n",
"print([dx, dy, dz])"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"exp1.png\" />"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Właśnie tak wygląda obliczanie pochodnych metodą propagacji wstecznej!"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* Spróbujmy czegoś bardziej skomplikowanego:<br/>metodą propagacji wstecznej obliczmy pochodną funkcji sigmoidalnej."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Propagacja wsteczna – funkcja sigmoidalna"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Funkcja sigmoidalna:\n",
"\n",
"$$f(\\theta,x) = \\frac{1}{1+e^{-(\\theta_0 x_0 + \\theta_1 x_1 + \\theta_2)}}$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"$$\n",
"\\begin{array}{lcl}\n",
"f(x) = \\frac{1}{x} \\quad & \\rightarrow & \\quad \\frac{df}{dx} = -\\frac{1}{x^2} \\\\\n",
"f_c(x) = c + x \\quad & \\rightarrow & \\quad \\frac{df}{dx} = 1 \\\\\n",
"f(x) = e^x \\quad & \\rightarrow & \\quad \\frac{df}{dx} = e^x \\\\\n",
"f_a(x) = ax \\quad & \\rightarrow & \\quad \\frac{df}{dx} = a \\\\\n",
"\\end{array}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"exp2.png\" />"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0.3932238664829637, -0.5898357997244456]\n",
"[-0.19661193324148185, -0.3932238664829637, 0.19661193324148185]\n"
]
}
],
"source": [
"# Przykładowe wagi i dane\n",
"w = [2,-3,-3]\n",
"x = [-1, -2]\n",
"\n",
"# Krok w przód\n",
"dot = w[0]*x[0] + w[1]*x[1] + w[2]\n",
"f = 1.0 / (1 + math.exp(-dot)) # funkcja sigmoidalna\n",
"\n",
"# Krok w tył\n",
"ddot = (1 - f) * f # pochodna funkcji sigmoidalnej\n",
"dx = [w[0] * ddot, w[1] * ddot]\n",
"dw = [x[0] * ddot, x[1] * ddot, 1.0 * ddot]\n",
"\n",
"print(dx)\n",
"print(dw)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Obliczanie gradientów – podsumowanie"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Gradient $f$ dla $x$ mówi, jak zmieni się całe wyrażenie przy zmianie wartości $x$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Gradienty łączymy, korzystając z **reguły łańcuchowej**."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* W kroku \"wstecz\" gradienty informują, które części grafu powinny być zwiększone lub zmniejszone (i z jaką siłą), aby zwiększyć wartość na wyjściu."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* W kontekście implementacji chcemy dzielić funkcję $f$ na części, dla których można łatwo obliczyć gradienty."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.5. Uczenie wielowarstwowych sieci neuronowych metodą propagacji wstecznej"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Mając algorytm SGD oraz gradienty wszystkich wag, moglibyśmy trenować każdą sieć."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Niech $\\Theta = (\\Theta^{(1)},\\Theta^{(2)},\\Theta^{(3)},\\beta^{(1)},\\beta^{(2)},\\beta^{(3)})$\n",
"* Funkcja sieci neuronowej z grafiki:\n",
"$$\\small h_\\Theta(x) = \\tanh(\\tanh(\\tanh(x\\Theta^{(1)}+\\beta^{(1)})\\Theta^{(2)} + \\beta^{(2)})\\Theta^{(3)} + \\beta^{(3)})$$\n",
"* Funkcja kosztu dla regresji:\n",
"$$J(\\Theta) = \\dfrac{1}{2m} \\sum_{i=1}^{m} (h_\\Theta(x^{(i)})- y^{(i)})^2 $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* Jak obliczymy gradienty?\n",
"\n",
"$$\\nabla_{\\Theta^{(l)}} J(\\Theta) = ? \\quad \\nabla_{\\beta^{(l)}} J(\\Theta) = ?$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### W kierunku propagacji wstecznej\n",
"\n",
"* Pewna (niewielka) zmiana ważonego wejścia $\\Delta z^l_j$ dla $j$-tego neuronu w warstwie $l$ pociąga za sobą (niewielką) zmianę kosztu: \n",
"\n",
"$$\\frac{\\partial J(\\Theta)}{\\partial z^{l}_j} \\Delta z^{l}_j$$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* If $\\frac{\\partial J(\\Theta)}{\\partial z^{l}_j}$ is large, a $\\Delta z^l_j$ with the opposite sign will reduce the cost.\n",
"* If $\\frac{\\partial J(\\Theta)}{\\partial z^l_j}$ is close to zero, changing $z^l_j$ will not improve the cost much."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"* We define the error $\\delta^l_j$ of neuron $j$ in layer $l$: \n",
"\n",
"$$\\delta^l_j := \\dfrac{\\partial J(\\Theta)}{\\partial z^l_j}$$ \n",
"$$\\delta^l := \\nabla_{z^l} J(\\Theta) \\quad \\textrm{ (vector notation)} $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### The fundamental equations of backpropagation\n",
"\n",
"$$\n",
"\\begin{array}{rcll}\n",
"\\delta^L & = & \\nabla_{a^L}J(\\Theta) \\odot { \\left( g^{L} \\right) }^{\\prime} \\left( z^L \\right) & (BP1) \\\\[2mm]\n",
"\\delta^{l} & = & \\left( \\delta^{l+1} \\left( \\Theta^{l+1} \\right) \\! ^\\top \\right) \\odot {{ \\left( g^{l} \\right) }^{\\prime}} \\left( z^{l} \\right) & (BP2)\\\\[2mm]\n",
"\\nabla_{\\beta^l} J(\\Theta) & = & \\delta^l & (BP3)\\\\[2mm]\n",
"\\nabla_{\\Theta^l} J(\\Theta) & = & \\left( a^{l-1} \\right) \\! ^\\top \\delta^l & (BP4)\\\\\n",
"\\end{array}\n",
"$$\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### (BP1)\n",
"$$ \\delta^L_j \\; = \\; \\frac{ \\partial J }{ \\partial a^L_j } \\, g' \\!\\! \\left( z^L_j \\right) $$\n",
"$$ \\delta^L \\; = \\; \\nabla_{a^L}J(\\Theta) \\odot { \\left( g^{L} \\right) }^{\\prime} \\left( z^L \\right) $$\n",
"The error in the last layer is the product of the rate of change of the cost with respect to the $j$-th output and the rate of change of the activation function at the point $z^L_j$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### (BP2)\n",
"$$ \\delta^{l} \\; = \\; \\left( \\delta^{l+1} \\left( \\Theta^{l+1} \\right) \\! ^\\top \\right) \\odot {{ \\left( g^{l} \\right) }^{\\prime}} \\left( z^{l} \\right) $$\n",
"To compute the error in the $l$-th layer, multiply the error from the next layer, $l+1$, by the transposed weight matrix, and then multiply the result elementwise by the rate of change of the activation function at the point $z^l$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### (BP3)\n",
"$$ \\nabla_{\\beta^l} J(\\Theta) \\; = \\; \\delta^l $$\n",
"The gradient of the cost function with respect to the biases of the $l$-th layer is simply the error $\\delta^l$ in that layer."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### (BP4)\n",
"$$ \\nabla_{\\Theta^l} J(\\Theta) \\; = \\; \\left( a^{l-1} \\right) \\! ^\\top \\delta^l $$\n",
"The gradient of the cost function with respect to the weights of the $l$-th layer is the outer product of the activations $a^{l-1}$ and the error $\\delta^l$: componentwise, $\\frac{\\partial J}{\\partial \\Theta^l_{ij}} = a^{l-1}_i \\delta^l_j$."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### The backpropagation algorithm"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"For a single example $(x,y)$:\n",
"1. **Input**: set the activations in the input layer: $a^{(0)}=x$ \n",
"2. **Feedforward:** for $l=1,\\dots,L$ compute \n",
"$z^{(l)} = a^{(l-1)} \\Theta^{(l)} + \\beta^{(l)}$ and $a^{(l)}=g^{(l)} \\!\\! \\left( z^{(l)} \\right)$\n",
"3. **Output error $\\delta^{(L)}$:** compute the vector $$\\delta^{(L)}= \\nabla_{a^{(L)}}J(\\Theta) \\odot {g^{\\prime}}^{(L)} \\!\\! \\left( z^{(L)} \\right) $$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"4. **Backpropagate the error:** for $l = L-1,L-2,\\dots,1$ compute $$\\delta^{(l)} = \\delta^{(l+1)}(\\Theta^{(l+1)})^T \\odot {g^{\\prime}}^{(l)} \\!\\! \\left( z^{(l)} \\right) $$\n",
"5. **Gradients:** \n",
" * $\\dfrac{\\partial}{\\partial \\Theta_{ij}^{(l)}} J(\\Theta) = a_i^{(l-1)}\\delta_j^{(l)} \\textrm{ and } \\dfrac{\\partial}{\\partial \\beta_{j}^{(l)}} J(\\Theta) = \\delta_j^{(l)}$"
]
},
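{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The algorithm above can be sketched in NumPy for the tanh network with the squared-error cost (a minimal sketch under the row-vector convention $z^{(l)} = a^{(l-1)} \\Theta^{(l)} + \\beta^{(l)}$; the shapes and names are illustrative assumptions):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Backpropagation for a single example (x, y); tanh activations, J = 1/2 (a_L - y)^2\n",
"def backprop(x, y, Thetas, betas):\n",
"    # 1.-2. feedforward: store the activations of every layer\n",
"    a = [x]\n",
"    for Theta, beta in zip(Thetas, betas):\n",
"        a.append(np.tanh(a[-1] @ Theta + beta))\n",
"    # 3. output error: dJ/da_L = a_L - y, tanh'(z) = 1 - tanh(z)^2 = 1 - a_L^2\n",
"    delta = (a[-1] - y) * (1 - a[-1] ** 2)\n",
"    grads_Theta, grads_beta = [], []\n",
"    for l in range(len(Thetas) - 1, -1, -1):\n",
"        # 5. gradients: dJ/dTheta_ij = a_i delta_j, dJ/dbeta_j = delta_j\n",
"        grads_Theta.append(a[l].T @ delta)\n",
"        grads_beta.append(delta)\n",
"        if l > 0:\n",
"            # 4. propagate the error to the previous layer\n",
"            delta = (delta @ Thetas[l].T) * (1 - a[l] ** 2)\n",
"    return grads_Theta[::-1], grads_beta[::-1]"
]
},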
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"In our example:\n",
"\n",
"$$\\small J(\\Theta) = \\frac{1}{2} \\left( a^{(L)} - y \\right) ^2 $$\n",
"$$\\small \\dfrac{\\partial}{\\partial a^{(L)}} J(\\Theta) = a^{(L)} - y$$\n",
"\n",
"$$\\small \\tanh^{\\prime}(x) = 1 - \\tanh^2(x)$$"
]
},
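{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The derivative formula for $\\tanh$ can be verified numerically (a quick sanity check):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Check tanh'(x) = 1 - tanh(x)^2 with a central finite difference\n",
"xs = np.linspace(-3, 3, 7)\n",
"eps = 1e-6\n",
"numeric = (np.tanh(xs + eps) - np.tanh(xs - eps)) / (2 * eps)\n",
"analytic = 1 - np.tanh(xs) ** 2\n",
"np.allclose(numeric, analytic)"
]
},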
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img src=\"nn3.png\" width=\"65%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### The SGD algorithm with backpropagation\n",
"\n",
"A single iteration:\n",
"1. For the parameters $\\Theta = (\\Theta^{(1)},\\ldots,\\Theta^{(L)})$, create auxiliary zero matrices $\\Delta = (\\Delta^{(1)},\\ldots,\\Delta^{(L)})$ of the same dimensions (the biases $\\beta$ are omitted for simplicity)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"2. For the $m$ examples in the batch, $i = 1,\\ldots,m$:\n",
" * Run the backpropagation algorithm for the example $(x^{(i)}, y^{(i)})$ and store the gradients $\\nabla_{\\Theta}J^{(i)}(\\Theta)$ for this example;\n",
" * $\\Delta := \\Delta + \\dfrac{1}{m}\\nabla_{\\Theta}J^{(i)}(\\Theta)$\n",
"3. Update the weights: $\\Theta := \\Theta - \\alpha \\Delta$"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Backpropagation – a summary\n",
"\n",
"* The algorithm was first introduced in the 1970s.\n",
"* In 1986, David Rumelhart, Geoffrey Hinton and Ronald Williams showed that it is much faster than earlier training methods.\n",
"* Today it is the most popular algorithm for training neural networks."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.6. Implementing neural networks"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>łod.dł.</th>\n",
" <th>łod.sz.</th>\n",
" <th>pł.dł.</th>\n",
" <th>pł.sz.</th>\n",
" <th>Iris setosa?</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>5.2</td>\n",
" <td>3.4</td>\n",
" <td>1.4</td>\n",
" <td>0.2</td>\n",
" <td>1.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>5.1</td>\n",
" <td>3.7</td>\n",
" <td>1.5</td>\n",
" <td>0.4</td>\n",
" <td>1.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>6.7</td>\n",
" <td>3.1</td>\n",
" <td>5.6</td>\n",
" <td>2.4</td>\n",
" <td>0.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>6.5</td>\n",
" <td>3.2</td>\n",
" <td>5.1</td>\n",
" <td>2.0</td>\n",
" <td>0.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>4.9</td>\n",
" <td>2.5</td>\n",
" <td>4.5</td>\n",
" <td>1.7</td>\n",
" <td>0.0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>6.0</td>\n",
" <td>2.7</td>\n",
" <td>5.1</td>\n",
" <td>1.6</td>\n",
" <td>0.0</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" łod.dł. łod.sz. pł.dł. pł.sz. Iris setosa?\n",
"0 5.2 3.4 1.4 0.2 1.0\n",
"1 5.1 3.7 1.5 0.4 1.0\n",
"2 6.7 3.1 5.6 2.4 0.0\n",
"3 6.5 3.2 5.1 2.0 0.0\n",
"4 4.9 2.5 4.5 1.7 0.0\n",
"5 6.0 2.7 5.1 1.6 0.0"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas\n",
"src_cols = ['łod.dł.', 'łod.sz.', 'pł.dł.', 'pł.sz.', 'Gatunek']\n",
"trg_cols = ['łod.dł.', 'łod.sz.', 'pł.dł.', 'pł.sz.', 'Iris setosa?']\n",
"data = (\n",
" pandas.read_csv('iris.csv', usecols=src_cols)\n",
" .apply(lambda x: [x[0], x[1], x[2], x[3], 1 if x[4] == 'Iris-setosa' else 0], axis=1))\n",
"data.columns = trg_cols\n",
"data[:6]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[1. 5.2 3.4 1.4 0.2]\n",
" [1. 5.1 3.7 1.5 0.4]\n",
" [1. 6.7 3.1 5.6 2.4]\n",
" [1. 6.5 3.2 5.1 2. ]\n",
" [1. 4.9 2.5 4.5 1.7]\n",
" [1. 6. 2.7 5.1 1.6]]\n",
"[[1.]\n",
" [1.]\n",
" [0.]\n",
" [0.]\n",
" [0.]\n",
" [0.]]\n"
]
}
],
"source": [
"m, n_plus_1 = data.values.shape\n",
"n = n_plus_1 - 1\n",
"Xn = data.values[:, 0:n].reshape(m, n)\n",
"X = np.matrix(np.concatenate((np.ones((m, 1)), Xn), axis=1)).reshape(m, n_plus_1)\n",
"Y = np.matrix(data.values[:, n]).reshape(m, 1)\n",
"\n",
"print(X[:6])\n",
"print(Y[:6])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"scrolled": true,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/pawel/.local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
" from ._conv import register_converters as _register_converters\n",
"Using TensorFlow backend.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/1\n",
"150/150 [==============================] - 0s 2ms/step - loss: 3.6282 - acc: 0.3333\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7f9bd195e190>"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from keras.models import Sequential\n",
"from keras.layers import Dense\n",
"\n",
"model = Sequential()\n",
"model.add(Dense(3, input_dim=5))\n",
"model.add(Dense(3))\n",
"model.add(Dense(1, activation='sigmoid'))\n",
"\n",
"model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n",
"\n",
"model.fit(X, Y)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"0.05484907701611519"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.predict(np.array([1.0, 3.0, 1.0, 2.0, 4.0]).reshape(-1, 5)).tolist()[0][0]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"150/150 [==============================] - 0s 293us/step\n",
"()\n",
"loss:\t3.4469\n",
"acc:\t0.3333\n"
]
}
],
"source": [
"scores = model.evaluate(X, Y)\n",
"print()\n",
"for i in range(len(scores)):\n",
" print('{}:\\t{:.4f}'.format(model.metrics_names[i], scores[i]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 4.7. Example implementations of multilayer neural networks"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Example: MNIST\n",
"\n",
"_Modified National Institute of Standards and Technology database_"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"* A dataset of handwritten digits\n",
"* 60,000 training examples, 10,000 test examples\n",
"* Resolution of each example: 28 × 28 = 784 pixels"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# source: https://github.com/keras-team/keras/blob/master/examples/mnist_mlp.py\n",
"\n",
"import keras\n",
"from keras.datasets import mnist\n",
"\n",
"# load the data, split into training and test sets\n",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def draw_examples(examples, captions=None):\n",
" plt.figure(figsize=(16, 4))\n",
" m = len(examples)\n",
" for i, example in enumerate(examples):\n",
" plt.subplot(100 + m * 10 + i + 1)\n",
" plt.imshow(example, cmap=plt.get_cmap('gray'))\n",
" plt.show()\n",
" if captions is not None:\n",
" print(6 * ' ' + (10 * ' ').join(str(captions[i]) for i in range(m)))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA6IAAACPCAYAAADgImbyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAHEtJREFUeJzt3XmQVNXZx/HniIAYREQIISKCggiR\nTUDB1wITwBVZJKKEPUYoUYSUUKASgzEIolLFIlEkMIKUaIVVI0EElKiEAgnmZXXAyJYRUEE2Iy96\n3z/o5T5Hpqd7uvvc2z3fT9UU9ze3u+/pnme659D93GM8zxMAAAAAAFw5J+gBAAAAAADKFiaiAAAA\nAACnmIgCAAAAAJxiIgoAAAAAcIqJKAAAAADAKSaiAAAAAACnmIgCAAAAAJxiIgoAAAAAcCqtiagx\n5hZjzA5jzE5jzOhMDQq5h1pAFLUAEeoAcdQCRKgDxFELiPE8r1RfIlJORHaJyOUiUkFEPhaRxiVc\nx+Mr574OZboWQnCf+MpCHVALZeYr488J1ELOfvH6wFdW6oBayNkvaoGvpGvB87y03hG9VkR2ep73\nqed5p0Rkvoh0TeP2EE67k7gMtZD/kqkDEWqhLOA5AVHUAkSoA8RRC4hK6u/GdCail4jIXl/eF/me\nYowZZIzZYIzZkMaxEG4l1gJ1UGZQCxDh9QFxPCdAhOcExFELiDk32wfwPG+GiMwQETHGeNk+HsKJ\nOkAUtYAoagEi1AHiqAVEUQtlQzrviO4XkUt9uXbkeyh7qAVEUQsQoQ4QRy1AhDpAHLWAmHQmoutF\npIExpp4xpoKI3CMiSzMzLOQYagFR1AJEqAPEUQsQoQ4QRy0gptQfzfU877Qx5kERWS5nzoA1y/O8\nLRkbGXIGtYAoagEi1AHiqAWIUAeIoxbgZyKnRXZzMD7jnYs+8jyvVSZvkDrISRmvAxFqIUdRC4ji\n9QEiPCcgjlpAVFK1kM5HcwEAAAAASBkTUQAAAACAU0xEAQAAAABOMREFAAAAADjFRBQAAAAA4BQT\nUQAAAACAU0xEAQAAAABOMREFAAAAADh1btADAPJVy5YtVX7wwQdV7tevn8pz5sxReerUqSpv3Lgx\ng6MDAABANk2ePFnlhx56KLa9efNmta9z584q7969O3sDCwneEQUAAAAAOMVEFAAAAADgFB/NTVK5\ncuVUvvDCC5O+rv2RzPPPP1/lhg0bqvzAAw+o/Oyzz6rcq1cvlf/73/+qPGHChNj2E088kfQ4kZ7m\nzZurvGLFCpWrVKmisud5Kvft21flLl26qHzxxRenO0TkiQ4dOqg8b948ldu3b6/yjh07sj4mZMeY\nMWNUtp/TzzlH/3/yjTfeqPJ7772XlXEByIwLLrhA5cqVK6t8++23q1yjRg2VJ02apPK3336bwdEh\nVXXr1lW5T58+Kn///fex7UaNGql9V111lcp8NBcAAAAAgAxjIgoAAAAAcIqJKAAAAADAqTLTI1qn\nTh2VK1SooPL111+v8g033KBy1apVVe7Ro0fGxrZv3z6Vp0yZonL37t1VPnbsmMoff/yxyvQEuXPt\ntdfGthcsWKD22X3Edk+o/XM8deqUynZPaJs2bVS2l3Oxr18WtGvXLrZtP16LFi1yPRxnWrdurfL6\n9esDGgkybcCAASqPGjVKZX9/0dnYzzMAgufvG7R/p9u2bavy1VdfndJt16pVS2X/8iBw79ChQyqv\nWbNGZfv8H2Ud74gCAAAAAJxiIgoAAAAAcIqJKAAAAADAqbztEbXXdFy1apXKqawDmml2j4+9Ttzx\n48dVttcILCoqUvnw4cMqs2Zg5thrvl5zzTUqv/LKK7Ftu0+jJIWFhSpPnDhR5fnz56v8wQcfqGzX\nzfjx41M6fj7wr5nYoEEDtS+fekTttSLr1aun
8mWXXaayMSbrY0J22D/L8847L6CRIFXXXXedyv71\nA+21fX/2s58lvK0RI0ao/J///Edl+zwW/tciEZF169YlHiwyyl7/cfjw4Sr37t07tl2pUiW1z36+\n3rt3r8r2+STstSd79uyp8vTp01Xevn17ccNGFpw4cULlsrAWaDp4RxQAAAAA4BQTUQAAAACAU0xE\nAQAAAABO5W2P6J49e1T+8ssvVc5kj6jdi3HkyBGVf/7zn6tsr/c4d+7cjI0FmfXiiy+q3KtXr4zd\ntt1vWrlyZZXt9WD9/ZAiIk2bNs3YWHJVv379Yttr164NcCTZZfcf33fffSrb/WH0BOWOjh07qjx0\n6NCEl7d/tp07d1b5wIEDmRkYSnT33XerPHnyZJWrV68e27b7AN99912Va9SoofIzzzyT8Nj27dnX\nv+eeexJeH6mx/2Z8+umnVbZr4YILLkj6tu3zRdx8880qly9fXmX7OcBfZ2fLcKtq1aoqN2vWLKCR\n5AbeEQUAAAAAOMVEFAAAAADgFBNRAAAAAIBTedsj+tVXX6k8cuRIle2+mn/+858qT5kyJeHtb9q0\nKbbdqVMntc9eQ8heL2zYsGEJbxvBadmypcq33367yonWZ7R7Ot944w2Vn332WZXtdeHsGrTXh/3F\nL36R9FjKCnt9zXw1c+bMhPvtHiOEl73+4+zZs1Uu6fwFdu8ga9Rlz7nn6j+RWrVqpfJLL72ksr3u\n9Jo1a2LbTz75pNr3/vvvq1yxYkWVX3/9dZVvuummhGPdsGFDwv1IT/fu3VX+zW9+U+rb2rVrl8r2\n35D2OqL169cv9bHgnv08UKdOnaSv27p1a5XtfuB8fL4vG3/FAQAAAABCg4koAAAAAMCpEieixphZ\nxpiDxpjNvu9VM8asMMYURv69KLvDRBhQC4iiFiBCHSCOWkAUtQAR6gDJSaZHtEBEponIHN/3RovI\nSs/zJhhjRkfyqMwPL3MWL16s8qpVq1Q+duyYyva6P/fee6/K/n4/uyfUtmXLFpUHDRqUeLDhVSB5\nUAt+zZs3V3nFihUqV6lSRWXP81RetmxZbNteY7R9+/YqjxkzRmW77+/QoUMqf/zxxyp///33Ktv9\nq/a6pBs3bpQsKpAAasFeO7VmzZqZvPnQKqlv0K5bhwokz54Tsq1///4q//SnP014eXu9yTlz5pz9\ngsErkDyrhT59+qhcUq+2/XvoX1vy6NGjCa9rr0NZUk/ovn37VH755ZcTXt6xAsmzWrjrrrtSuvxn\nn32m8vr162Pbo0bpu233hNoaNWqU0rFDpEDyrA6SYZ//o6CgQOWxY8cWe11735EjR1SeNm1aOkML\npRLfEfU8b42IfGV9u6uIRJ/1XhaRbhkeF0KIWkAUtQAR6gBx1AKiqAWIUAdITmnPmlvT87yiyPbn\nIlLs2xLGmEEikrNvAaJESdUCdVAmUAsQ4fUBcTwnIIpagAivD7CkvXyL53meMcZLsH+GiMwQEUl0\nOeS+RLVAHZQt1AJEeH1AHM8JiKIWIMLrA84o7UT0gDGmlud5RcaYWiJyMJODcqGkfo2vv/464f77\n7rsvtv3aa6+pfXYvX57LqVq48sorVbbXl7V78b744guVi4qKVPb35Rw/flzt++tf/5owp6tSpUoq\nP/zwwyr37t07o8dLQtZr4bbbblPZfgzyhd37Wq9evYSX379/fzaHk6qcek7IturVq6v861//WmX7\n9cLuCfrjH/+YnYG5kVO1YK/1+eijj6psnyNg+vTpKtvnASjp7wy/xx57LOnLiog89NBDKtvnGAih\nnKoFm/9vPpEfnuvj7bffVnnnzp0qHzxY+rubZ+dCyOk6KA37eSVRj2hZVNrlW5aKSPSMC/1FZElm\nhoMcRC0gilqACHWAOGoBUdQCRKgDWJJZvuVVEVkrIg2NMfuMMfeKyAQR6WSMKRSRjpGMPEctIIpa\ngAh1gDhq
AVHUAkSoAySnxI/mep7Xq5hdHTI8FoQctYAoagEi1AHiqAVEUQsQoQ6QnLRPVpSv7M9w\nt2zZUmX/GpEdO3ZU++x
"text/plain": [
"<matplotlib.figure.Figure at 0x7fdda922aad0>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" 5 0 4 1 9 2 1\n"
]
}
],
"source": [
"draw_examples(x_train[:7], captions=y_train)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"60000 training examples\n",
"10000 test examples\n"
]
}
],
"source": [
"num_classes = 10\n",
"\n",
"x_train = x_train.reshape(60000, 784) # 784 = 28 * 28\n",
"x_test = x_test.reshape(10000, 784)\n",
"x_train = x_train.astype('float32')\n",
"x_test = x_test.astype('float32')\n",
"x_train /= 255\n",
"x_test /= 255\n",
"print('{} training examples'.format(x_train.shape[0]))\n",
"print('{} test examples'.format(x_test.shape[0]))\n",
"\n",
"# convert the class vectors to binary class matrices\n",
"y_train = keras.utils.to_categorical(y_train, num_classes)\n",
"y_test = keras.utils.to_categorical(y_test, num_classes)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"scrolled": true,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"dense_1 (Dense) (None, 512) 401920 \n",
"_________________________________________________________________\n",
"dropout_1 (Dropout) (None, 512) 0 \n",
"_________________________________________________________________\n",
"dense_2 (Dense) (None, 512) 262656 \n",
"_________________________________________________________________\n",
"dropout_2 (Dropout) (None, 512) 0 \n",
"_________________________________________________________________\n",
"dense_3 (Dense) (None, 10) 5130 \n",
"=================================================================\n",
"Total params: 669,706\n",
"Trainable params: 669,706\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"from keras.layers import Dropout  # Dropout has not been imported yet\n",
"\n",
"model = Sequential()\n",
"model.add(Dense(512, activation='relu', input_shape=(784,)))\n",
"model.add(Dropout(0.2))\n",
"model.add(Dense(512, activation='relu'))\n",
"model.add(Dropout(0.2))\n",
"model.add(Dense(num_classes, activation='softmax'))\n",
"model.summary()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"((60000, 784), (60000, 10))\n"
]
}
],
"source": [
"print(x_train.shape, y_train.shape)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/5\n",
"60000/60000 [==============================] - 9s 153us/step - loss: 0.2489 - acc: 0.9224 - val_loss: 0.1005 - val_acc: 0.9706\n",
"Epoch 2/5\n",
"60000/60000 [==============================] - 9s 151us/step - loss: 0.1042 - acc: 0.9683 - val_loss: 0.0861 - val_acc: 0.9740\n",
"Epoch 3/5\n",
"60000/60000 [==============================] - 9s 153us/step - loss: 0.0742 - acc: 0.9782 - val_loss: 0.0733 - val_acc: 0.9796\n",
"Epoch 4/5\n",
"60000/60000 [==============================] - 9s 154us/step - loss: 0.0603 - acc: 0.9824 - val_loss: 0.0713 - val_acc: 0.9800\n",
"Epoch 5/5\n",
"60000/60000 [==============================] - 9s 157us/step - loss: 0.0512 - acc: 0.9848 - val_loss: 0.0749 - val_acc: 0.9795\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7fdda4f97110>"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from keras.optimizers import RMSprop  # RMSprop has not been imported yet\n",
"\n",
"model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])\n",
"\n",
"model.fit(x_train, y_train, batch_size=128, epochs=5, verbose=1,\n",
" validation_data=(x_test, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test loss: 0.074858742202\n",
"Test accuracy: 0.9795\n"
]
}
],
"source": [
"score = model.evaluate(x_test, y_test, verbose=0)\n",
"\n",
"print('Test loss: {}'.format(score[0]))\n",
"print('Test accuracy: {}'.format(score[1]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"A _dropout_ layer is a regularization technique that helps prevent overfitting: during training, a randomly chosen subset of the network's units is temporarily dropped."
]
},
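{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The idea can be illustrated directly on a vector of activations (a toy sketch of \"inverted\" dropout, not the actual Keras implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Toy illustration of (inverted) dropout\n",
"def dropout(a, rate, rng):\n",
"    # each unit is kept with probability 1 - rate...\n",
"    mask = rng.uniform(size=a.shape) >= rate\n",
"    # ...and the survivors are rescaled so the expected activation is unchanged\n",
"    return a * mask / (1.0 - rate)\n",
"\n",
"dropout(np.ones((1, 10)), rate=0.2, rng=np.random.RandomState(0))"
]
},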
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"dense_4 (Dense) (None, 512) 401920 \n",
"_________________________________________________________________\n",
"dense_5 (Dense) (None, 512) 262656 \n",
"_________________________________________________________________\n",
"dense_6 (Dense) (None, 10) 5130 \n",
"=================================================================\n",
"Total params: 669,706\n",
"Trainable params: 669,706\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n",
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/5\n",
"60000/60000 [==============================] - 8s 139us/step - loss: 0.2237 - acc: 0.9303 - val_loss: 0.0998 - val_acc: 0.9676\n",
"Epoch 2/5\n",
"60000/60000 [==============================] - 8s 136us/step - loss: 0.0818 - acc: 0.9748 - val_loss: 0.0788 - val_acc: 0.9770\n",
"Epoch 3/5\n",
"60000/60000 [==============================] - 8s 136us/step - loss: 0.0538 - acc: 0.9831 - val_loss: 0.1074 - val_acc: 0.9695\n",
"Epoch 4/5\n",
"60000/60000 [==============================] - 10s 161us/step - loss: 0.0397 - acc: 0.9879 - val_loss: 0.0871 - val_acc: 0.9763\n",
"Epoch 5/5\n",
"60000/60000 [==============================] - 12s 195us/step - loss: 0.0299 - acc: 0.9910 - val_loss: 0.0753 - val_acc: 0.9812\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7fdda3dcad50>"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Without Dropout layers\n",
"\n",
"num_classes = 10\n",
"\n",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()\n",
"\n",
"x_train = x_train.reshape(60000, 784) # 784 = 28 * 28\n",
"x_test = x_test.reshape(10000, 784)\n",
"x_train = x_train.astype('float32')\n",
"x_test = x_test.astype('float32')\n",
"x_train /= 255\n",
"x_test /= 255\n",
"\n",
"y_train = keras.utils.to_categorical(y_train, num_classes)\n",
"y_test = keras.utils.to_categorical(y_test, num_classes)\n",
"\n",
"model_no_dropout = Sequential()\n",
"model_no_dropout.add(Dense(512, activation='relu', input_shape=(784,)))\n",
"model_no_dropout.add(Dense(512, activation='relu'))\n",
"model_no_dropout.add(Dense(num_classes, activation='softmax'))\n",
"model_no_dropout.summary()\n",
"\n",
"model_no_dropout.compile(loss='categorical_crossentropy',\n",
" optimizer=RMSprop(),\n",
" metrics=['accuracy'])\n",
"\n",
"model_no_dropout.fit(x_train, y_train,\n",
" batch_size=128,\n",
" epochs=5,\n",
" verbose=1,\n",
" validation_data=(x_test, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test loss (no dropout): 0.0753162465898\n",
"Test accuracy (no dropout): 0.9812\n"
]
}
],
"source": [
"# Without Dropout layers\n",
"\n",
"score = model_no_dropout.evaluate(x_test, y_test, verbose=0)\n",
"\n",
"print('Test loss (no dropout): {}'.format(score[0]))\n",
"print('Test accuracy (no dropout): {}'.format(score[1]))"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"dense_7 (Dense) (None, 2500) 1962500 \n",
"_________________________________________________________________\n",
"dense_8 (Dense) (None, 2000) 5002000 \n",
"_________________________________________________________________\n",
"dense_9 (Dense) (None, 1500) 3001500 \n",
"_________________________________________________________________\n",
"dense_10 (Dense) (None, 1000) 1501000 \n",
"_________________________________________________________________\n",
"dense_11 (Dense) (None, 500) 500500 \n",
"_________________________________________________________________\n",
"dense_12 (Dense) (None, 10) 5010 \n",
"=================================================================\n",
"Total params: 11,972,510\n",
"Trainable params: 11,972,510\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n",
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/10\n",
"60000/60000 [==============================] - 145s 2ms/step - loss: 1.4242 - acc: 0.5348 - val_loss: 0.4426 - val_acc: 0.8638\n",
"Epoch 2/10\n",
"60000/60000 [==============================] - 140s 2ms/step - loss: 0.3245 - acc: 0.9074 - val_loss: 0.2231 - val_acc: 0.9360\n",
"Epoch 3/10\n",
"60000/60000 [==============================] - 137s 2ms/step - loss: 0.1993 - acc: 0.9420 - val_loss: 0.1694 - val_acc: 0.9485\n",
"Epoch 4/10\n",
"60000/60000 [==============================] - 136s 2ms/step - loss: 0.1471 - acc: 0.9571 - val_loss: 0.1986 - val_acc: 0.9381\n",
"Epoch 5/10\n",
"60000/60000 [==============================] - 132s 2ms/step - loss: 0.1189 - acc: 0.9650 - val_loss: 0.1208 - val_acc: 0.9658\n",
"Epoch 6/10\n",
"60000/60000 [==============================] - 131s 2ms/step - loss: 0.0983 - acc: 0.9711 - val_loss: 0.1260 - val_acc: 0.9637\n",
"Epoch 7/10\n",
"60000/60000 [==============================] - 129s 2ms/step - loss: 0.0818 - acc: 0.9753 - val_loss: 0.0984 - val_acc: 0.9727\n",
"Epoch 8/10\n",
"60000/60000 [==============================] - 129s 2ms/step - loss: 0.0710 - acc: 0.9784 - val_loss: 0.1406 - val_acc: 0.9597\n",
"Epoch 9/10\n",
"60000/60000 [==============================] - 129s 2ms/step - loss: 0.0611 - acc: 0.9811 - val_loss: 0.0987 - val_acc: 0.9727\n",
"Epoch 10/10\n",
"60000/60000 [==============================] - 136s 2ms/step - loss: 0.0533 - acc: 0.9837 - val_loss: 0.1070 - val_acc: 0.9718\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7fdd95c86610>"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# More layers, a different activation function\n",
"\n",
"num_classes = 10\n",
"\n",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()\n",
"\n",
"x_train = x_train.reshape(60000, 784) # 784 = 28 * 28\n",
"x_test = x_test.reshape(10000, 784)\n",
"x_train = x_train.astype('float32')\n",
"x_test = x_test.astype('float32')\n",
"x_train /= 255\n",
"x_test /= 255\n",
"\n",
"y_train = keras.utils.to_categorical(y_train, num_classes)\n",
"y_test = keras.utils.to_categorical(y_test, num_classes)\n",
"\n",
"model3 = Sequential()\n",
"model3.add(Dense(2500, activation='tanh', input_shape=(784,)))\n",
"model3.add(Dense(2000, activation='tanh'))\n",
"model3.add(Dense(1500, activation='tanh'))\n",
"model3.add(Dense(1000, activation='tanh'))\n",
"model3.add(Dense(500, activation='tanh'))\n",
"model3.add(Dense(num_classes, activation='softmax'))\n",
"model3.summary()\n",
"\n",
"model3.compile(loss='categorical_crossentropy',\n",
" optimizer=RMSprop(),\n",
" metrics=['accuracy'])\n",
"\n",
"model3.fit(x_train, y_train,\n",
" batch_size=128,\n",
" epochs=10,\n",
" verbose=1,\n",
" validation_data=(x_test, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test loss: 0.107020105763\n",
"Test accuracy: 0.9718\n"
]
}
],
"source": [
"# More layers, a different activation function\n",
"\n",
"score = model3.evaluate(x_test, y_test, verbose=0)\n",
"\n",
"print('Test loss: {}'.format(score[0]))\n",
"print('Test accuracy: {}'.format(score[1]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Example: a 4-pixel camera\n",
"\n",
"https://www.youtube.com/watch?v=ILsA4nyG7I0"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"import random  # used below to pick random variants\n",
"\n",
"def generate_example(description):\n",
" variant = random.choice([1, -1])\n",
" if description == 's': # solid\n",
" return (np.array([[ 1.0, 1.0], [ 1.0, 1.0]]) if variant == 1 else\n",
" np.array([[-1.0, -1.0], [-1.0, -1.0]]))\n",
" elif description == 'v': # vertical\n",
" return (np.array([[ 1.0, -1.0], [ 1.0, -1.0]]) if variant == 1 else\n",
" np.array([[-1.0, 1.0], [-1.0, 1.0]]))\n",
" elif description == 'd': # diagonal\n",
" return (np.array([[ 1.0, -1.0], [-1.0, 1.0]]) if variant == 1 else\n",
" np.array([[-1.0, 1.0], [ 1.0, -1.0]]))\n",
" elif description == 'h': # horizontal\n",
" return (np.array([[ 1.0, 1.0], [-1.0, -1.0]]) if variant == 1 else\n",
" np.array([[-1.0, -1.0], [ 1.0, 1.0]]))\n",
" else:\n",
" return np.array([[random.uniform(-1, 1), random.uniform(-1, 1)],\n",
" [random.uniform(-1, 1), random.uniform(-1, 1)]])"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"num_classes = 4\n",
"\n",
"trainset_size = 4000\n",
"testset_size = 1000\n",
"\n",
"y4_train = np.array([random.choice(['s', 'v', 'd', 'h']) for i in range(trainset_size)])\n",
"x4_train = np.array([generate_example(desc) for desc in y4_train])\n",
"\n",
"y4_test = np.array([random.choice(['s', 'v', 'd', 'h']) for i in range(testset_size)])\n",
"x4_test = np.array([generate_example(desc) for desc in y4_test])"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA60AAACQCAYAAADjqY0xAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAADlVJREFUeJzt3V+opPddx/HP192mhUi1bZY2bLJN\nxKQhioI7if9AQkVIi2wEe5F6YSuVBSEIXhkQLPRKvRHEYgk1JPWirXihqyiltUi9aE3OSmPTlk3X\nYpvdRtptpJKq3ezy8+LMs3O6Obs7J+eZmd+Zeb3gYc/MPJnnmZN3fs9+d2ZPqrUWAAAA6NEPrPoE\nAAAA4FoMrQAAAHTL0AoAAEC3DK0AAAB0y9AKAABAtwytAAAAdGtfQ2tVvbGqPllVX5n++oZr7He5\nqj4/3U7t55j0SQsMtECiA2a0wEALJDrg1an9/H9aq+qPkrzYWvuDqno0yRtaa7+7y34vtdZ+cB/n\nSee0wEALJDpgRgsMtECiA16d/Q6tZ5I80Fp7oapuTfJPrbW37bKf6NacFhhogUQHzGiBgRZIdMCr\ns9+/0/rm1toL06//M8mbr7Hf66pqq6o+V1W/ss9j0ictMNACiQ6Y0QIDLZDogFfh8I12qKpPJXnL\nLg/93s4brbVWVdd62/atrbXzVfUjST5dVV9orf37Lsc6meTk9ObxG50bfamqb7XWjmgBLezN8eMH\n82U999xzefnll19x/9GjR3Po0KFMJpN2+vTpC621IzrYbItaE26++ebj99xzzwLPnL260brg+kCS\nTP/duz7M6aD+PmEew+8Tbrhja+1Vb0nOJLl1+vWtSc7M8c88keRdc+zXbAdue0YLNi3sfVtHd999\nd/vGN77RkmzpwJYFrQnHjx9feMuM5+67715YCx00btv75vow57bOkmy1OebO/X48+FSS90y/fk+S\nv7l6h6p6Q1W9dvr1LUl+PsmX9nlc+vSm6a9aQAsb7sSJE3nyySeHmzrAmkBOnDiRaIHvpwPmM89k\ne60t2wvPPyb5SpJPJXnj9P5Jkg9Pv/65JF9I8sz01/fN+dwr/1MN2563/9aCTQt739bRhQsX2tvf\n/vaW5P90YMuC1gTvtB4sFy5cWFgLHTRu2/vm+jDnts4y5zut+/rpwYt0nc+306/TrbXJ2E+qhQNJ\nC3vQ6zo8hqoavYV17WDNLWRNmEwmbWtra+ynZYEWsSZMn9e6cPC4PszJ7xP2/9ODAQAAYGEMrQAA\nAHTL0AoAAEC3DK0AAAB0y9AKAABAtwytAAAAdMvQCgAAQLcMrQAAAHTL0AoAAEC3DK0AAAB0y9AK\nAABAtwytAAAAdMvQCgAAQLcMrQAAAHTL0AoAAEC3DK0AAAB0y9AKAABAtwytAAAAdGuUobWqHqyq\nM1V1tqoe3eXx11bVx6eP/0tV3THGcemPFhhoganX64DEmsCMFphyfWBu+x5aq+pQkg8meUeSe5O8\nu6ruvWq39yX5r9bajyb54yR/uN/j0i0tMNDChrt8+XKSHIsO2GZNYKAFEtcH9mCMd1rvT3K2tfbV\n1trFJB9L8tBV+zyU5Mnp13+V5BerqkY4Nn25OVpgmxbIU089lSTf0wGxJjCjBQauD8xtjKH1aJLn\nd9w+N71v131aa5eSfCfJm0Y4Nn25KVpgmxbI+fPnk+Tijrt0sLmsCQy0wMD1gbkdXvUJ7FRVJ5Oc\nXPV5sHpaYKAFEh0ws7OFY8eOrfhsWCXrAokONsUY77SeT3L7jtu3Te/bdZ+qOpzkh5J8++onaq09\n1lqbtNYmI5wXy3cxWmCbFsjRo0eT7XdVBjrYXAtZE44cObKg02WBXB8YuD4wtzGG1qeT3FVVd1bV\nTUkeTnLqqn1OJXnP9Ot3Jfl0a62NcGz68t1o
gW1aIPfdd1+SvE4HxJrAjBYYuD4wt31/PLi1dqmq\nHknyiSSHkjzeWvtiVX0gyVZr7VSSP0/yF1V1NsmL2Q6T9aQFBlrYcIcPH06Sr0cHbLMmMNACiesD\ne1C9/oFFVfV5YlzP6UV8NEMLB5IW9qDXdXgMVTV6C+vawZpbyJowmUza1tbW2E/LAi1iTZg+r3Xh\n4HF9mJPfJ4zz8WAAAABYCEMrAAAA3TK0AgAA0C1DKwAAAN0ytAIAANAtQysAAADdMrQCAADQLUMr\nAAAA3TK0AgAA0C1DKwAAAN0ytAIAANAtQysAAADdMrQCAADQLUMrAAAA3TK0AgAA0C1DKwAAAN0y\ntAIAANAtQysAAADdMrQCAADQrVGG1qp6sKrOVNXZqnp0l8ffW1XfqqrPT7ffHOO49EcLDLTA1Ot1\nQGJN4AprAgMtMLfD+32CqjqU5INJfinJuSRPV9Wp1tqXrtr14621R/Z7PLqnBQZa2HCXL19OkmNJ\n7o0OsCZsPGsCV9ECcxvjndb7k5xtrX21tXYxyceSPDTC83Lw3BwtsE0L5KmnnkqS7+mAWBOINYFX\n0AJz2/c7rUmOJnl+x+1zSX56l/1+tap+IclzSX6ntfb81TtU1ckkJ0c4J1bjpmiBbQtp4dixY/na\n1762gNNdrapa9Sks0sUdX1sTNtfCrg9r/t/POhplTUisC2vA9YG5LesHMf1tkjtaaz+R5JNJntxt\np9baY621SWttsqTzYvm0wGDPLRw5cmSpJ8hSWBMYaIFkzg4SLWwAawJXjDG0nk9y+47bt03vu6K1\n9u3W2vemNz+c5PgIx6U/F6MFtmmBwU07vtbB5rImMLAmMNACcxtjaH06yV1VdWdV3ZTk4SSndu5Q\nVbfuuHkiyZdHOC79+W60wDYtMHidDog1gRlrAgMtMLd9/53W1tqlqnokySeSHEryeGvti1X1gSRb\nrbVTSX67qk4kuZTkxSTv3e9x6ZYWGGiBJPl6dMA2awKJNYEZLTC3aq2t+hx2VVV9nhjXc3oRf59A\nCwfSQlqYTCZta2tr7KdduTX/QTKjt2BNOJBcHxhogYHrw5x6ndfGUFVzdbCsH8QEAAAAe2ZoBQAA\noFuGVgAAALplaAUAAKBbhlYAAAC6ZWgFAACgW4ZWAAAAumVoBQAAoFuGVgAAALplaAUAAKBbhlYA\nAAC6ZWgFAACgW4ZWAAAAumVoBQAAoFuGVgAAALplaAUAAKBbhlYAAAC6ZWgFAACgW6MMrVX1eFV9\ns6qevcbjVVV/UlVnq+rfquqnxjgu3blDB0xpgYEWSHTAjBYYaIG5jfVO6xNJHrzO4+9Ictd0O5nk\nz0Y6Ln25EB2wTQsMtECiA2a0wEALzG2UobW19pkkL15nl4eSfKRt+1ySH66qW8c4Nl15KTpgmxYY\naIFEB8xogYEWmNuy/k7r0STP77h9bnofm0UHDLTAQAskOmBGCwy0wBWHV30CO1XVyWy//c+G0wKD\nnS0cO3ZsxWfDqlgTGGiBgRZIdLAplvVO6/kkt++4fdv0vu/TWnustTZprU2WdF4s11wdJFrYAK+q\nhSNHjizl5Fgq1wcS1wdmtMDA9YErljW0nkry69OfAvYzSb7TWnthScemHzpgoAUGWiDRATNaYKAF\nrhjl48FV9dEkDyS5parOJXl/ktckSWvtQ0n+Psk7k5xN8j9JfmOM49KdO5N8NjpAC8xogUQHzGiB\ngRaYW7XWVn0Ou6qqPk+M6zm9iI9maOFAWkgLk8mkbW1tjf20K1dVqz6FRRq9BWvCgeT6wEALDFwf\n5tTrvDaGqpqrg2V9PBgAAAD2zNAKAABAtwytAAAAdMvQCgAAQLcMrQAAAHTL0AoAAEC3DK0AAAB0\ny9AKAABAtwytAAAAdMvQCgAAQLcMrQAAAHTL0AoAAEC3DK0AAAB0y9AKAABAtwytAAAAdMvQCgAA\nQLcMrQAA
AHTL0AoAAEC3Rhlaq+rxqvpmVT17jccfqKrvVNXnp9vvj3FcunOHDpjSAgMtkOiAGS0w\n+EkdMK/DIz3PE0n+NMl
"text/plain": [
"<matplotlib.figure.Figure at 0x7f4d3ffc2ed0>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" s s d s h s v\n"
]
}
],
"source": [
"draw_examples(x4_train[:7], captions=y4_train)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"x4_train = x4_train.reshape(trainset_size, 4)\n",
"x4_test = x4_test.reshape(testset_size, 4)\n",
"x4_train = x4_train.astype('float32')\n",
"x4_test = x4_test.astype('float32')\n",
"\n",
"y4_train = np.array([{'s': 0, 'v': 1, 'd': 2, 'h': 3}[desc] for desc in y4_train])\n",
"y4_test = np.array([{'s': 0, 'v': 1, 'd': 2, 'h': 3}[desc] for desc in y4_test])\n",
"\n",
"y4_train = keras.utils.to_categorical(y4_train, num_classes)\n",
"y4_test = keras.utils.to_categorical(y4_test, num_classes)"
]
},
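  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "A quick sanity check (a sketch added for illustration, not part of the original lecture): `keras.utils.to_categorical` turns integer class indices into one-hot vectors, which is the target format expected by `categorical_crossentropy`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "outputs": [],
   "source": [
    "# One-hot encoding example: class indices 0..3 become 4-dimensional vectors,\n",
    "# e.g. index 0 -> [1, 0, 0, 0] and index 2 -> [0, 0, 1, 0].\n",
    "print(keras.utils.to_categorical([0, 2], num_classes))"
   ]
  },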
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"dense_16 (Dense) (None, 4) 20 \n",
"_________________________________________________________________\n",
"dense_17 (Dense) (None, 4) 20 \n",
"_________________________________________________________________\n",
"dense_18 (Dense) (None, 8) 40 \n",
"_________________________________________________________________\n",
"dense_19 (Dense) (None, 4) 36 \n",
"=================================================================\n",
"Total params: 116\n",
"Trainable params: 116\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"model4 = Sequential()\n",
"model4.add(Dense(4, activation='tanh', input_shape=(4,)))\n",
"model4.add(Dense(4, activation='tanh'))\n",
"model4.add(Dense(8, activation='relu'))\n",
"model4.add(Dense(num_classes, activation='softmax'))\n",
"model4.summary()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"model4.layers[0].set_weights(\n",
" [np.array([[ 1.0, 0.0, 1.0, 0.0],\n",
" [ 0.0, 1.0, 0.0, 1.0],\n",
" [ 1.0, 0.0, -1.0, 0.0],\n",
" [ 0.0, 1.0, 0.0, -1.0]],\n",
" dtype=np.float32), np.array([0., 0., 0., 0.], dtype=np.float32)])\n",
"model4.layers[1].set_weights(\n",
" [np.array([[ 1.0, -1.0, 0.0, 0.0],\n",
" [ 1.0, 1.0, 0.0, 0.0],\n",
" [ 0.0, 0.0, 1.0, -1.0],\n",
" [ 0.0, 0.0, -1.0, -1.0]],\n",
" dtype=np.float32), np.array([0., 0., 0., 0.], dtype=np.float32)])\n",
"model4.layers[2].set_weights(\n",
" [np.array([[ 1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],\n",
" [ 0.0, 0.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0],\n",
" [ 0.0, 0.0, 0.0, 0.0, 1.0, -1.0, 0.0, 0.0],\n",
" [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, -1.0]],\n",
" dtype=np.float32), np.array([0., 0., 0., 0., 0., 0., 0., 0.], dtype=np.float32)])"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"model4.layers[3].set_weights(\n",
" [np.array([[ 1.0, 0.0, 0.0, 0.0],\n",
" [ 1.0, 0.0, 0.0, 0.0],\n",
" [ 0.0, 1.0, 0.0, 0.0],\n",
" [ 0.0, 1.0, 0.0, 0.0],\n",
" [ 0.0, 0.0, 1.0, 0.0],\n",
" [ 0.0, 0.0, 1.0, 0.0],\n",
" [ 0.0, 0.0, 0.0, 1.0],\n",
" [ 0.0, 0.0, 0.0, 1.0]],\n",
" dtype=np.float32), np.array([0., 0., 0., 0.], dtype=np.float32)])\n",
"\n",
"model4.compile(loss='categorical_crossentropy',\n",
" optimizer=Adagrad(),\n",
" metrics=['accuracy'])"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[array([[ 1., 0., 1., 0.],\n",
" [ 0., 1., 0., 1.],\n",
" [ 1., 0., -1., 0.],\n",
" [ 0., 1., 0., -1.]], dtype=float32), array([ 0., 0., 0., 0.], dtype=float32)]\n",
"[array([[ 1., -1., 0., 0.],\n",
" [ 1., 1., 0., 0.],\n",
" [ 0., 0., 1., -1.],\n",
" [ 0., 0., -1., -1.]], dtype=float32), array([ 0., 0., 0., 0.], dtype=float32)]\n",
"[array([[ 1., -1., 0., 0., 0., 0., 0., 0.],\n",
" [ 0., 0., 1., -1., 0., 0., 0., 0.],\n",
" [ 0., 0., 0., 0., 1., -1., 0., 0.],\n",
" [ 0., 0., 0., 0., 0., 0., 1., -1.]], dtype=float32), array([ 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)]\n",
"[array([[ 1., 0., 0., 0.],\n",
" [ 1., 0., 0., 0.],\n",
" [ 0., 1., 0., 0.],\n",
" [ 0., 1., 0., 0.],\n",
" [ 0., 0., 1., 0.],\n",
" [ 0., 0., 1., 0.],\n",
" [ 0., 0., 0., 1.],\n",
" [ 0., 0., 0., 1.]], dtype=float32), array([ 0., 0., 0., 0.], dtype=float32)]\n"
]
}
],
"source": [
"for layer in model4.layers:\n",
" print(layer.get_weights())"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 0.17831734, 0.17831734, 0.17831734, 0.46504799]], dtype=float32)"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model4.predict([np.array([[1.0, 1.0], [-1.0, -1.0]]).reshape(1, 4)])"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test loss: 0.765614629269\n",
"Test accuracy: 1.0\n"
]
}
],
"source": [
"score = model4.evaluate(x4_test, y4_test, verbose=0)\n",
"\n",
"print('Test loss: {}'.format(score[0]))\n",
"print('Test accuracy: {}'.format(score[1]))"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"dense_20 (Dense) (None, 4) 20 \n",
"_________________________________________________________________\n",
"dense_21 (Dense) (None, 4) 20 \n",
"_________________________________________________________________\n",
"dense_22 (Dense) (None, 8) 40 \n",
"_________________________________________________________________\n",
"dense_23 (Dense) (None, 4) 36 \n",
"=================================================================\n",
"Total params: 116\n",
"Trainable params: 116\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"model5 = Sequential()\n",
"model5.add(Dense(4, activation='tanh', input_shape=(4,)))\n",
"model5.add(Dense(4, activation='tanh'))\n",
"model5.add(Dense(8, activation='relu'))\n",
"model5.add(Dense(num_classes, activation='softmax'))\n",
"model5.compile(loss='categorical_crossentropy',\n",
" optimizer=RMSprop(),\n",
" metrics=['accuracy'])\n",
"model5.summary()"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"scrolled": true,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train on 4000 samples, validate on 1000 samples\n",
"Epoch 1/8\n",
"4000/4000 [==============================] - 0s - loss: 1.1352 - acc: 0.5507 - val_loss: 1.0160 - val_acc: 0.7330\n",
"Epoch 2/8\n",
"4000/4000 [==============================] - 0s - loss: 0.8918 - acc: 0.8722 - val_loss: 0.8094 - val_acc: 0.8580\n",
"Epoch 3/8\n",
"4000/4000 [==============================] - 0s - loss: 0.6966 - acc: 0.8810 - val_loss: 0.6283 - val_acc: 0.8580\n",
"Epoch 4/8\n",
"4000/4000 [==============================] - 0s - loss: 0.5284 - acc: 0.8810 - val_loss: 0.4697 - val_acc: 0.8580\n",
"Epoch 5/8\n",
"4000/4000 [==============================] - 0s - loss: 0.3797 - acc: 0.9022 - val_loss: 0.3312 - val_acc: 1.0000\n",
"Epoch 6/8\n",
"4000/4000 [==============================] - 0s - loss: 0.2555 - acc: 1.0000 - val_loss: 0.2166 - val_acc: 1.0000\n",
"Epoch 7/8\n",
"4000/4000 [==============================] - 0s - loss: 0.1612 - acc: 1.0000 - val_loss: 0.1318 - val_acc: 1.0000\n",
"Epoch 8/8\n",
"4000/4000 [==============================] - 0s - loss: 0.0939 - acc: 1.0000 - val_loss: 0.0732 - val_acc: 1.0000\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7f4d34067510>"
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model5.fit(x4_train, y4_train, epochs=8, validation_data=(x4_test, y4_test))"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 0.00708295, 0.00192736, 0.02899081, 0.96199888]], dtype=float32)"
]
},
"execution_count": 45,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model5.predict([np.array([[1.0, 1.0], [-1.0, -1.0]]).reshape(1, 4)])"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test loss: 0.0731911802292\n",
"Test accuracy: 1.0\n"
]
}
],
"source": [
"score = model5.evaluate(x4_test, y4_test, verbose=0)\n",
"\n",
"print('Test loss: {}'.format(score[0]))\n",
"print('Test accuracy: {}'.format(score[1]))"
]
},
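  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "source": [
    "To read the softmax outputs back as labels (a sketch added for illustration: `idx_to_label` below is simply the inverse of the `{'s': 0, 'v': 1, 'd': 2, 'h': 3}` dictionary used when encoding the targets):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "notes"
    }
   },
   "outputs": [],
   "source": [
    "# Invert the label -> index mapping used when preparing y4_train/y4_test\n",
    "idx_to_label = {0: 's', 1: 'v', 2: 'd', 3: 'h'}\n",
    "\n",
    "probs = model5.predict(x4_test)\n",
    "# argmax over the class axis picks the most probable class for each sample\n",
    "predicted_labels = [idx_to_label[i] for i in np.argmax(probs, axis=1)]\n",
    "print(predicted_labels[:7])"
   ]
  },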
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"import contextlib\n",
"\n",
"@contextlib.contextmanager\n",
"def printoptions(*args, **kwargs):\n",
" original = np.get_printoptions()\n",
" np.set_printoptions(*args, **kwargs)\n",
" try:\n",
" yield\n",
" finally: \n",
" np.set_printoptions(**original)"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"scrolled": true,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[array([[-0.2, -0.5, 0.8, 1. ],\n",
" [-0.9, 0.1, -0.8, 0.2],\n",
" [-0.2, 0.4, 0.1, -0.4],\n",
" [-0.8, 0.8, 1. , 0.3]], dtype=float32), array([ 0. , -0. , 0.1, -0.1], dtype=float32)]\n",
"[array([[-0.4, 0.9, -1.3, 1.7],\n",
" [-0.4, -0.7, 0.3, -0.3],\n",
" [ 0.8, -0.9, -1.1, -0.2],\n",
" [ 1.3, 0.5, 0.4, -0.2]], dtype=float32), array([-0. , -0. , 0.2, 0. ], dtype=float32)]\n",
"[array([[-1.6, 0.3, 0.3, -0.3, -1.1, 1.2, 0.7, -1. ],\n",
" [ 0.4, 1.3, -0.9, 0.8, -0.4, -0.7, -1.2, -1. ],\n",
" [ 0.6, 1. , 0.9, -1. , -1.1, -0.2, -0.4, -0.3],\n",
" [ 1.1, 0.1, -0.9, 1.3, -0.3, -0.2, 0.2, -0.4]], dtype=float32), array([-0. , 0.2, -0.1, 0. , -0.1, -0. , -0.1, 0.1], dtype=float32)]\n",
"[array([[ 0.6, -1.5, 1.3, -1.4],\n",
" [-0.4, -1.6, -0.3, 1.2],\n",
" [ 1.2, 1.1, -0.3, -1.5],\n",
" [ 0.6, 1.4, -1.5, -1.2],\n",
" [ 0.2, -1.3, -0.9, 0.8],\n",
" [ 0.6, -1.5, 0.8, -1. ],\n",
" [ 0.4, -1.3, 0.4, 0.3],\n",
" [-1.3, 0.5, -0.9, 0.8]], dtype=float32), array([-0.8, 0.7, 0.4, 0.1], dtype=float32)]\n"
]
}
],
"source": [
"with printoptions(precision=1, suppress=True):\n",
" for layer in model5.layers:\n",
" print(layer.get_weights())"
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
},
"livereveal": {
"start_slideshow_at": "selected",
"theme": "white"
}
},
"nbformat": 4,
"nbformat_minor": 4
}