uczenie-maszynowe/wyk/13_CNN.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Machine learning\n",
"# 13. Convolutional neural networks"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Splotowe sieci neuronowe, inaczej konwolucyjne sieci neuronowe (*convolutional neural networks*, CNN, ConvNet)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Konwolucyjne sieci neuronowe wykorzystuje się do:\n",
"\n",
"* rozpoznawania obrazu\n",
"* analizy wideo\n",
"* innych zagadnień o podobnej strukturze"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Innymi słowy, CNN przydają się, gdy mamy bardzo dużo danych wejściowych, w których istotne jest ich sąsiedztwo."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Warstwy konwolucyjne"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"For simplicity, let us assume one-dimensional data, e.g. we want to determine whether a human voice is present in a given recording."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Nasze nagranie możemy reprezentować jako ciąg $n$ próbek dźwiękowych:\n",
"$$(x_0, x_1, \\ldots, x_n)$$\n",
"(możemy traktować je jak jednowymiarowe „piksele”)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Najprostsza metoda „zwykła” jednowarstwowa sieć neuronowa (każdy z każdym) nie poradzi sobie zbyt dobrze w tym przypadku:\n",
"\n",
"* dużo danych wejściowych\n",
"* nie wykrywa własności „lokalnych” wejścia"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-F.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Chcielibyśmy wykrywać pewne lokalne „wzory” w danych wejściowych.\n",
"\n",
"W tym celu tworzymy mniejszą sieć neuronową (mniej neuronów wejściowych) i _kopiujemy_ ją tak, żeby każda jej kopia działała na pewnym fragmencie wejścia (fragmenty mogą nachodzić na siebie)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv3.png\"/>"
]
},
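{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"A minimal NumPy sketch of this idea (the signal and weight values below are made up for illustration): a tiny \"network\" with a 3-sample input window and shared weights, applied at every position of the input. Sliding it along the signal is exactly a one-dimensional convolution."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# a hypothetical signal with a local \"peak\" around index 3\n",
"x = np.array([0.0, 0.1, 0.9, 1.0, 0.8, 0.1, 0.0, 0.2])\n",
"\n",
"# weights and bias of the small network, shared by all of its copies\n",
"w = np.array([-1.0, 2.0, -1.0])\n",
"b = 0.0\n",
"\n",
"# apply the same small network to every window of 3 neighboring samples\n",
"window = 3\n",
"out = np.array([w @ x[i:i + window] + b for i in range(len(x) - window + 1)])\n",
"print(out)  # responds most strongly around the local peak"
]
},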
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Warstwę sieci A nazywamy **warstwą konwolucyjną** (konwolucja = splot).\n",
"\n",
"Warstw konwolucyjnych może być więcej niż jedna."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"60%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"50%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"50%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Tak definiujemy formalnie funckję splotu dla 2 wymiarów:\n",
"\n",
"$$\n",
"\\left[\\begin{array}{ccc}\n",
"a & b & c\\\\\n",
"d & e & f\\\\\n",
"g & h & i\\\\\n",
"\\end{array}\\right]\n",
"*\n",
"\\left[\\begin{array}{ccc}\n",
"1 & 2 & 3\\\\\n",
"4 & 5 & 6\\\\\n",
"7 & 8 & 9\\\\\n",
"\\end{array}\\right] \n",
"=\\\\\n",
"(1 \\cdot a)+(2 \\cdot b)+(3 \\cdot c)+(4 \\cdot d)+(5 \\cdot e)\\\\+(6 \\cdot f)+(7 \\cdot g)+(8 \\cdot h)+(9 \\cdot i)\n",
"$$\n",
"\n",
"Więcej: https://en.wikipedia.org/wiki/Kernel_(image_processing)"
]
},
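{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"For illustration, a sketch in NumPy (the image and kernel values are made up): the result of the operation above is the sum of the element-wise products of the kernel and an image patch, and sliding the kernel over the whole image produces a feature map."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# a hypothetical 5x5 \"image\" with a vertical edge in the middle\n",
"image = np.array([[0, 0, 1, 1, 1],\n",
"                  [0, 0, 1, 1, 1],\n",
"                  [0, 0, 1, 1, 1],\n",
"                  [0, 0, 1, 1, 1],\n",
"                  [0, 0, 1, 1, 1]], dtype=float)\n",
"\n",
"# a Prewitt-like 3x3 kernel that responds to vertical edges\n",
"kernel = np.array([[-1, 0, 1],\n",
"                   [-1, 0, 1],\n",
"                   [-1, 0, 1]], dtype=float)\n",
"\n",
"# \"valid\" convolution: sum of element-wise products at every position\n",
"h, w = kernel.shape\n",
"out = np.array([[np.sum(image[i:i + h, j:j + w] * kernel)\n",
"                 for j in range(image.shape[1] - w + 1)]\n",
"                for i in range(image.shape[0] - h + 1)])\n",
"print(out)  # nonzero only where the intensity changes from 0 to 1"
]
},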
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Ilustracja działania funkcji splotu:\n",
"\n",
"<img style=\"margin: auto\" height=\"80%\" src=\"https://devblogs.nvidia.com/wp-content/uploads/2015/11/Convolution_schematic.gif\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Jednostka warstwy konwolucyjnej może się składać z jednej lub kilku warstw neuronów."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Jeden neuron może odpowiadać np. za wykrywanie pionowych krawędzi, drugi poziomych, a jeszcze inny np. krzyżujących się linii."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### _Pooling_"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Obrazy składają się na ogół z milionów pikseli. Oznacza to, że nawet po zastosowaniu kilku warstw konwolucyjnych mielibyśmy sporo parametrów do wytrenowania.\n",
"\n",
"Żeby zredukować liczbę parametrów, a dzięki temu uprościć obliczenia, stosuje się warstwy ***pooling***.\n",
"\n",
2023-01-23 16:01:45 +01:00
"*Pooling* to rodzaj próbkowania. Najpopularniejszą jego odmianą jest *max-pooling*, czyli wybieranie najwyższej wartości spośród kilku sąsiadujących pikseli (rys. 13.1)."
2023-01-23 15:42:40 +01:00
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"![Fig. 13.1. Pooling](Max_pooling.png \"Fig. 13.1. Pooling\")\n",
"\n",
"Fig. 13.1 - source: [Aphex34](https://commons.wikimedia.org/wiki/File:Max_pooling.png), [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0), Wikimedia Commons"
]
},
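{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"A minimal *max-pooling* sketch in NumPy (a hypothetical 4x4 feature map): 2x2 windows with stride 2, keeping only the maximum of each window."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# a hypothetical 4x4 feature map\n",
"fmap = np.array([[1, 1, 2, 4],\n",
"                 [5, 6, 7, 8],\n",
"                 [3, 2, 1, 0],\n",
"                 [1, 2, 3, 4]], dtype=float)\n",
"\n",
"# 2x2 max-pooling with stride 2: split into 2x2 blocks, take the max of each\n",
"pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))\n",
"print(pooled)  # [[6. 8.]\n",
"               #  [3. 4.]]"
]
},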
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"_Pooling_ layers and convolutional layers can be interleaved with each other (Fig. 13.2)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"![Fig. 13.2. CNN](Typical_cnn.png \"Fig. 13.2. CNN\")\n",
"\n",
"Fig. 13.2 - source: [Aphex34](https://commons.wikimedia.org/wiki/File:Typical_cnn.png), [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0), Wikimedia Commons"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"_Pooling_ idea: nie jest istotne, w którym *dokładnie* miejscu na obrazku dana cecha (krawędź, oko, itp.) się znajduje, wystarczy przybliżona lokalizacja."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Do sieci konwolucujnych możemy dokładać też warstwy ReLU."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"https://www.youtube.com/watch?v=FmpDIaiMIeA"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"Zobacz też: https://colah.github.io/posts/2014-07-Conv-Nets-Modular/"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Przykład: MNIST"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import math\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import random\n",
"\n",
"from IPython.display import YouTubeVideo"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2023-01-27 12:50:47.601029: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n",
"To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
"2023-01-27 12:50:48.662241: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n",
"2023-01-27 12:50:48.662268: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n",
"2023-01-27 12:50:51.653864: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\n",
"2023-01-27 12:50:51.654326: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\n",
"2023-01-27 12:50:51.654341: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n"
]
}
],
"source": [
"import keras\n",
"from keras.datasets import mnist\n",
"\n",
"from keras.models import Sequential\n",
"from keras.layers import Dense, Dropout, Flatten\n",
"from keras.layers import Conv2D, MaxPooling2D\n",
"\n",
"# załaduj dane i podziel je na zbiory uczący i testowy\n",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def draw_examples(examples, captions=None):\n",
" plt.figure(figsize=(16, 4))\n",
" m = len(examples)\n",
" for i, example in enumerate(examples):\n",
" plt.subplot(100 + m * 10 + i + 1)\n",
" plt.imshow(example, cmap=plt.get_cmap('gray'))\n",
" plt.show()\n",
" if captions is not None:\n",
" print(6 * ' ' + (10 * ' ').join(str(captions[i]) for i in range(m)))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<Figure size 1600x400 with 7 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" 5 0 4 1 9 2 1\n"
]
}
],
"source": [
"draw_examples(x_train[:7], captions=y_train)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"batch_size = 128\n",
"num_classes = 10\n",
"epochs = 12\n",
"\n",
"# input image dimensions\n",
"img_rows, img_cols = 28, 28"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"if keras.backend.image_data_format() == 'channels_first':\n",
" x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)\n",
" x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)\n",
" input_shape = (1, img_rows, img_cols)\n",
"else:\n",
" x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\n",
" x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)\n",
" input_shape = (img_rows, img_cols, 1)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x_train shape: (60000, 28, 28, 1)\n",
"60000 train samples\n",
"10000 test samples\n"
]
}
],
"source": [
"x_train = x_train.astype('float32')\n",
"x_test = x_test.astype('float32')\n",
"x_train /= 255\n",
"x_test /= 255\n",
"print('x_train shape: {}'.format(x_train.shape))\n",
"print('{} train samples'.format(x_train.shape[0]))\n",
"print('{} test samples'.format(x_test.shape[0]))\n",
"\n",
"# convert class vectors to binary class matrices\n",
"y_train = keras.utils.to_categorical(y_train, num_classes)\n",
"y_test = keras.utils.to_categorical(y_test, num_classes)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2023-01-27 12:51:13.294000: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\n",
"2023-01-27 12:51:13.295301: W tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:265] failed call to cuInit: UNKNOWN ERROR (303)\n",
"2023-01-27 12:51:13.295539: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ELLIOT): /proc/driver/nvidia/version does not exist\n",
"2023-01-27 12:51:13.298310: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n",
"To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"model = Sequential()\n",
"model.add(Conv2D(32, kernel_size=(3, 3),\n",
" activation='relu',\n",
" input_shape=input_shape))\n",
"model.add(Conv2D(64, (3, 3), activation='relu'))\n",
"model.add(MaxPooling2D(pool_size=(2, 2)))\n",
"model.add(Dropout(0.25))\n",
"model.add(Flatten())\n",
"model.add(Dense(128, activation='relu'))\n",
"model.add(Dropout(0.5))\n",
"model.add(Dense(num_classes, activation='softmax'))"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model: \"sequential\"\n",
"_________________________________________________________________\n",
" Layer (type) Output Shape Param # \n",
"=================================================================\n",
" conv2d (Conv2D) (None, 26, 26, 32) 320 \n",
" \n",
" conv2d_1 (Conv2D) (None, 24, 24, 64) 18496 \n",
" \n",
" max_pooling2d (MaxPooling2D (None, 12, 12, 64) 0 \n",
" ) \n",
" \n",
" dropout (Dropout) (None, 12, 12, 64) 0 \n",
" \n",
" flatten (Flatten) (None, 9216) 0 \n",
" \n",
" dense (Dense) (None, 128) 1179776 \n",
" \n",
" dropout_1 (Dropout) (None, 128) 0 \n",
" \n",
" dense_1 (Dense) (None, 10) 1290 \n",
" \n",
"=================================================================\n",
"Total params: 1,199,882\n",
"Trainable params: 1,199,882\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"model.summary()"
]
},
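{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"source": [
"The parameter counts in the summary can be verified by hand: a `Conv2D` layer with $k$ filters of size $h \\times w$ over $c$ input channels has $k \\cdot (h \\cdot w \\cdot c + 1)$ parameters (the $+1$ is the bias). Hence $32 \\cdot (3 \\cdot 3 \\cdot 1 + 1) = 320$ and $64 \\cdot (3 \\cdot 3 \\cdot 32 + 1) = 18496$; the first dense layer has $9216 \\cdot 128 + 128 = 1179776$ parameters."
]
},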
{
"cell_type": "code",
"execution_count": 1,
2023-01-23 15:42:40 +01:00
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'model' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn [1], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m model\u001b[38;5;241m.\u001b[39mcompile(loss\u001b[38;5;241m=\u001b[39mkeras\u001b[38;5;241m.\u001b[39mlosses\u001b[38;5;241m.\u001b[39mcategorical_crossentropy,\n\u001b[1;32m 2\u001b[0m optimizer\u001b[38;5;241m=\u001b[39mkeras\u001b[38;5;241m.\u001b[39moptimizers\u001b[38;5;241m.\u001b[39mAdadelta(),\n\u001b[1;32m 3\u001b[0m metrics\u001b[38;5;241m=\u001b[39m[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124maccuracy\u001b[39m\u001b[38;5;124m'\u001b[39m])\n",
"\u001b[0;31mNameError\u001b[0m: name 'model' is not defined"
]
}
],
"source": [
"model.compile(loss=keras.losses.categorical_crossentropy,\n",
" optimizer=keras.optimizers.Adadelta(),\n",
" metrics=['accuracy'])"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/12\n",
"60000/60000 [==============================] - 333s - loss: 0.3256 - acc: 0.9037 - val_loss: 0.0721 - val_acc: 0.9780\n",
"Epoch 2/12\n",
"60000/60000 [==============================] - 342s - loss: 0.1088 - acc: 0.9683 - val_loss: 0.0501 - val_acc: 0.9835\n",
"Epoch 3/12\n",
"60000/60000 [==============================] - 366s - loss: 0.0837 - acc: 0.9748 - val_loss: 0.0429 - val_acc: 0.9860\n",
"Epoch 4/12\n",
"60000/60000 [==============================] - 311s - loss: 0.0694 - acc: 0.9788 - val_loss: 0.0380 - val_acc: 0.9878\n",
"Epoch 5/12\n",
"60000/60000 [==============================] - 325s - loss: 0.0626 - acc: 0.9815 - val_loss: 0.0334 - val_acc: 0.9886\n",
"Epoch 6/12\n",
"60000/60000 [==============================] - 262s - loss: 0.0552 - acc: 0.9835 - val_loss: 0.0331 - val_acc: 0.9890\n",
"Epoch 7/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0494 - acc: 0.9852 - val_loss: 0.0291 - val_acc: 0.9903\n",
"Epoch 8/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0461 - acc: 0.9859 - val_loss: 0.0294 - val_acc: 0.9902\n",
"Epoch 9/12\n",
"60000/60000 [==============================] - 219s - loss: 0.0423 - acc: 0.9869 - val_loss: 0.0287 - val_acc: 0.9907\n",
"Epoch 10/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0418 - acc: 0.9875 - val_loss: 0.0299 - val_acc: 0.9906\n",
"Epoch 11/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0388 - acc: 0.9879 - val_loss: 0.0304 - val_acc: 0.9905\n",
"Epoch 12/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0366 - acc: 0.9889 - val_loss: 0.0275 - val_acc: 0.9910\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7f70b80b1a10>"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.fit(x_train, y_train,\n",
" batch_size=batch_size,\n",
" epochs=epochs,\n",
" verbose=1,\n",
" validation_data=(x_test, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"('Test loss:', 0.027530849870144449)\n",
"('Test accuracy:', 0.99099999999999999)\n"
]
}
],
"source": [
"score = model.evaluate(x_test, y_test, verbose=0)\n",
"print('Test loss:', score[0])\n",
"print('Test accuracy:', score[1])"
]
}
],
"metadata": {
"author": "Paweł Skórzewski",
"celltoolbar": "Slideshow",
"email": "pawel.skorzewski@amu.edu.pl",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"lang": "pl",
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"livereveal": {
"start_slideshow_at": "selected",
"theme": "white"
},
"subtitle": "12.Splotowe sieci neuronowe[wykład]",
"title": "Uczenie maszynowe",
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
},
"year": "2021"
},
"nbformat": 4,
"nbformat_minor": 4
}