{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Machine Learning UMZ 2019/2020\n",
"### June 9, 2020\n",
"# 13. Convolutional neural networks"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Convolutional neural networks (CNNs) are used for:\n",
"\n",
"* image recognition\n",
"* video analysis\n",
"* other problems with a similar structure"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"In other words, CNNs are useful when we have a very large number of input values in which the neighborhood of values matters."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Convolutional layers"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"For simplicity, let us assume that the data is one-dimensional – e.g. we want to determine whether a human voice is present in a given recording."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"This is what our recording looks like:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-xs.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"(a sequence of audio samples – we can treat them as one-dimensional \"pixels\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The simplest approach – an \"ordinary\" single-layer neural network (fully connected, every input to every neuron):"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-F.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Drawbacks:\n",
"\n",
"* a large number of input values\n",
"* it does not detect \"local\" properties of the input"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"We would like to detect certain local \"patterns\" in the input data.\n",
"\n",
"To this end, we create a smaller neural network (with fewer input neurons) and _copy_ it so that each copy operates on some fragment of the input (the fragments may overlap):"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Each network A has 2 input neurons (not very realistic). "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" width=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv3.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Each network A has 3 input neurons (still not very realistic, but a bit more so). "
]
},
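{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The sliding copies of A can be sketched in a few lines of NumPy (an added illustration, not part of the original slides; the signal and the weights of A are made-up values):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"signal = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 1.0])  # hypothetical 1-D recording\n",
"weights = np.array([0.5, 1.0, 0.5])  # hypothetical weights shared by all copies of A\n",
"\n",
"# one output per (overlapping) 3-sample fragment of the input\n",
"outputs = np.array([signal[i:i + 3] @ weights for i in range(len(signal) - 2)])\n",
"print(outputs)  # [2.5 4.5 3.5 1.5]\n",
"```\n",
"\n",
"Every copy applies the *same* weights, which is why a convolutional layer has far fewer parameters than a fully connected one."
]
},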
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"We call the layer made up of the A networks a **convolutional layer**.\n",
"\n",
"There can be more than one convolutional layer:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-9-Conv2Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"In two dimensions it looks like this:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"A close-up of a single unit A:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-unit.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"This is how we formally define the convolution operation for 2 dimensions:\n",
"\n",
"$$\n",
"\\left[\\begin{array}{ccc}\n",
"a & b & c\\\\\n",
"d & e & f\\\\\n",
"g & h & i\\\\\n",
"\\end{array}\\right]\n",
"*\n",
"\\left[\\begin{array}{ccc}\n",
"1 & 2 & 3\\\\\n",
"4 & 5 & 6\\\\\n",
"7 & 8 & 9\\\\\n",
"\\end{array}\\right] \n",
"=\\\\\n",
"(1 \\cdot a)+(2 \\cdot b)+(3 \\cdot c)+(4 \\cdot d)+(5 \\cdot e)\\\\+(6 \\cdot f)+(7 \\cdot g)+(8 \\cdot h)+(9 \\cdot i)\n",
"$$\n",
"\n",
"More: https://en.wikipedia.org/wiki/Kernel_(image_processing)"
]
},
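{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Evaluated at a single window position, the formula above is just an elementwise product followed by a sum (an added sketch with made-up values substituted for $a, \\ldots, i$):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# hypothetical 3x3 image window (the values a..i from the formula)\n",
"patch = np.array([[1.0, 0.0, 2.0],\n",
"                  [0.0, 1.0, 0.0],\n",
"                  [3.0, 0.0, 1.0]])\n",
"kernel = np.array([[1, 2, 3],\n",
"                   [4, 5, 6],\n",
"                   [7, 8, 9]])\n",
"\n",
"# (1*a) + (2*b) + ... + (9*i), exactly as in the definition\n",
"value = float(np.sum(patch * kernel))\n",
"print(value)  # 42.0\n",
"```\n",
"\n",
"Sliding the window over the whole image and repeating this at every position yields the convolved output."
]
},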
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"And this is roughly how it works:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"https://devblogs.nvidia.com/wp-content/uploads/2015/11/Convolution_schematic.gif\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"A unit of a convolutional layer may consist of one or several layers of neurons:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-A.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv-A-NIN.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"One neuron may be responsible, say, for detecting vertical edges, another for horizontal ones, and yet another for crossing lines."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Example filters learned by the first convolutional layer:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/KSH-filters.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### _Pooling_"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Images usually consist of millions of pixels. This means that even after applying several convolutional layers, we would still have a lot of parameters to train.\n",
"\n",
"To reduce the number of parameters, and thereby simplify the computations, **_pooling_** layers are used.\n",
"\n",
"_Pooling_ is a kind of subsampling. Its most popular variant is _max-pooling_, i.e. taking the highest value among several neighboring pixels."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"https://upload.wikimedia.org/wikipedia/commons/e/e9/Max_pooling.png\"/>"
]
},
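{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"2x2 max-pooling with stride 2, as in the picture above, can be sketched as follows (an added illustration; the feature-map values are made up):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"fmap = np.array([[1, 3, 2, 1],\n",
"                 [4, 6, 6, 8],\n",
"                 [3, 1, 1, 0],\n",
"                 [1, 2, 2, 4]])\n",
"\n",
"# group the 4x4 map into 2x2 blocks and keep the maximum of each block\n",
"pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))\n",
"print(pooled)\n",
"```\n",
"\n",
"The output is a 2x2 map: each value summarizes one 2x2 block of the input, so the spatial resolution (and the number of downstream parameters) drops by a factor of 4."
]
},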
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"_Pooling_ layers and convolutional layers can be interleaved:"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/Conv2-9x5-Conv2Max2Conv2.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"_Pooling_ – the idea: it does not matter where *exactly* in the image a given feature (an edge, an eye, etc.) is located; an approximate location is enough."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"We can also add ReLU layers to convolutional networks."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAUDBAgHCAkICAgGCAkFBwcICAYGBwcICAgHBwgHBwcH\nBwcHChALBwgOCQcIDRUNDhERExMTCAsWGBYSGBASExIBBQUFCAcIDwgIDxIPDw0SEhUSEhISFxUS\nEhISEhUSEhIVEhISFRISFRIVFRUSEhUVFRIVEhIVFRIVFRIVFRUVFf/AABEIAWgB4AMBIgACEQED\nEQH/xAAcAAEAAgMBAQEAAAAAAAAAAAAAAwYCBAUHCAH/xABYEAABAwEDBggICAgMBAcAAAAAAgME\nEgEFEwYUIzNTkhEXIjJDUnLTByQxNEJjc4MIFSFEYnSTw1RhZHGBgpSiFiU1QVGRoaOxs7TEhKTw\n81WFwdHU4eP/xAAbAQEAAwADAQAAAAAAAAAAAAAAAQIDBAUHBv/EADYRAQABAwEGBQMDAgUFAAAA\nAAACAwQSARMVFiIyUhEUQlGRITEzBQY0Q/AjQWJyoSRjcYGT/9oADAMBAAIRAxEAPwD4yAAAAAAA\nAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\nBlwDgL5xZTdtD33u5HFlN20Pfe7k7Hdd12ODvO079PlQ+AcBfOLKbtoe+93I4spu2h773cjdd12G\n87Xv0+VD4BwF84spu2h773cjiym7aHvvdyN13XYbztO/T5UPgHAXziym7aHvvdyOLKbtoe+93I3X\nddhvO079PlQ+AcBfOLKbtoe+93I4spu2h773cjdd12G87Tv0+VD4BwF84spu2h773cjiym7aHvvd\nyN13XYbztO/T5UPgHAXziym7aHvvdyOLKbtoe+93I3XddhvO079PlQ+AcBfOLKbtoe+93I4spu2h\n773cjdd12G87Xv0+VD4BwF84spu2h773cjiym7aHvvdyN13XYbztO/T5UPgHAXziym7aHvvdyOLK\nbtoe+93I3VddmpvO179PlQ+AcBfOLKbtoe+93I4spu2h773cjdd12G87Tv0+VD4BwF84spu2h773\ncjiym7aHvvdyN13XYbztO/T5UPgHAXziym7aHvvdyOLKbtoe+93I3VddmpvO179PlQ+AcBfOLKbt\noe+93I4spu2h773cjdd12G87Tv0+VD4BwF84spu2h773cjiym7aHvvdyN13XYbztO/T5UPgHAXzi\nym7aHvvdyOLKbtoe+93I3XddhvO079PlQ+AcBfOLKbtoe+93I4spu2h773cjdd12G87Tv0+VD4Bw\nF84spu2h773cjiym7aHvvdyN13XYbztO/T5UPgHAXziym7aHvvdyOLKbtoe+93I3XddhvO079PlQ\n+AcBfOLKbtoe+93I4spu2h773cjdd12G87Tv0+VD4BwF84spu2h773cjiym7aHvvdyN13XYbztO/\nT5UPgHAXziym7aHvvdyOLKbtoe+93I3XddhvO079PlQ+AcBfOLKbtoe+93I4spu2h773cjdd12G8\n7Tv0+VD4BwF84spu2h773cjiym7aHvvdyN13XYbztO/T5UPgHAXziym7aHvvdyOLKbtoe+93I3Xd\ndhvO079PlQ+AcBfOLKbtoe+93I4spu2h773cjdd12G87Tv0+VD4BwF84spu2h773cjiym7aHvvdy\nN13XYbztO/T5UPgHAXziym7aHvvdyOLKbtoe+93I3XddhvO079PlQ+AcBfOLKbtoe+93I4spu2h7\n73cjdd12G87Tv0+VD4BwF84spu2h773cjiym7aHvvdyN13XYbztO/T5esgA9EeXgNliA+uypDElx\nO0bYfIXEKTbSpNKguwAAUAZtoUq2lKVqVs2
zAAAAeIAZuIUmxKlJXS50gGAAAAmiRlvuJaaQtxbn\nMbbGbOJbS6pteHIXQhzaBdCAAoAAuAAKACaJGW+qhpC3Fc/DbIQAALgACgAAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAAAAAAAAAAAALled6SYt03Vm78lnE+Ma8BeH89NrKSAu8fi/GVhznLrlypK3G\n+ezHziRHzj8owCF/KCVCum60xn8PE+Ma+Qxtjn5EzHH7xqedqcnsS2cd9fTSIciPHOt2c8c/97s8\n4ZYf7HIuK7lTFOosVTm8WXJ/Z2c4LPeV1XZ8WQVqmLbU4uXW+3C0rn9+Z5G3DLjqmLlNLjpbuu8E\nafR4juD83NKdDdk3PBUyhbiYj1442H0fm5apU1lPkl/f1VhDCPS7N9XPGRekVu75i2X3G4lGHFw6\nNDrzgRMn0KipvCVKwWnH3mdGjEccej/g5ZnIy/jq7ZVC8Bxi7qH+i8yKrfv8lQfrt7f7crRqT8YR\nWrU4c037LyZUpyJmTmdNXvyIzi0YbmL84YkGzByZgyn0Q2bzqkrXRiLi+LOfV3zrZGS2mo124q0N\npclXszW56GcM5uc/IzJKc1ekbGYWyiHKarff1f7QXnWnz86mxhy8qpy2cJ1xqqrN1vILZkvAS7Zd\nmdyl5q/NeQzEwMTTY0f/ADis3751J+tPf5xZrteSiLcalKQlLd6OrX9tHNK/jgwoYZuTlnGisS3U\nxn1uJx3q28DDw9NqDTumNGXWuVJWylvo22MR1w2cs4D8edJxWlt5xKeWjE6RnGOzk3BUmAmVEiom\nyXJTrL2IjOc0a+b+L+uL7Twomz/xU+S1zpi3pdrrLucRp+Mtl+jDd1MnOGJEc5M7+RYP128f9uXZ\ntbuPcOcZsl1td44zbCGG2m/UFJnfyLB+u3j/ALc4tOc5z0/vvcqpDCH9/wCls+DqBDkOSs5XSpu7\npa0N4OJ0OvNaJcLCrHZKplMFhdCJbjGlfe2EeOfng6RXKdbTrJF3XiyhvaPYJ0/ip+RdaIqGl53d\nE11b0CjS4MhnX5uXnPWE+pjCGdJyryuFGBnUJ9chptbLLzbjGHJbekeb+Lkz+T0NhSY0q8MOX6bb\nbGJGYe2EiQdPJ9C7pjOuzULZzuVdyGWHNa5m8zOJD+bmllBkxMdnuraaW4xLfdeZl/NlsyHtfnA2\n+vRkvhyZYoZWSS2pkqO6+y21dLbK5M+jRkfxDDW1nTMx5UaO+0iTiRfGWM4838X+cl6vJ5Wd3zDj\npjPSXF3c8y3IRiNPsx2fGM3/AGgqV+zLzTEcbehxorEtbKF4cViM6szp15zXnQhFZkXVERlBhRHa\nnHES0ZpgYbTf8WfhBUG8m4ztjrUWdjSYjDy8PAw2n8384zeQWl2GtN/Lk2oWliXFlrZf6Jz+LSp+\nDbz5P1WX/o5Ip5w8Z5f5E8OhHEuSMhhD8+UuOmX5swwxiOuM7f6ubUTIxbspTCH2VNuQXZrMvonI\nh3JE+U/DguwocOa2xBZiveJMSXWHo5DHmTFOyUS0Mx1RLgvFCGG0MN4bJO3rGwoqffLMZpSUxX1y\nE+mtbGGaAB2ujq6gAAkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQK/RABcTPzHV891\n5WHtFmDby02KSla0pc57ba9YYAbPRG0Z5yulKK10t8xuswrVwUgBKaJSpTaXVLS1Xy8PSYZbLtmQ\nYDiJPxjJnqicuNBwH228b8ozkpoM6lvmvCvgzfWpalLVznF1mFYBoozfeWumtxasPmYizNiStrmL\nW3ibNeGQgHizxldZZhX8lIAPFm2tSbakqpUZ5y7ViVvYm0r0pCCBm+8t22pa1qVtHF4hZoiLuRYl\nS71mKY564DcV9txz1GvwCrApUpZrQqYOnft6qmS3ZWrU+utCEdGaT8lbvPWtz2i8QhBNOnCCm0TZ\nyvgSmtd
LfMbr1ZC2tSeaAXSmYkra1Ti2/ZrwzCtXCpVS6nOeYAAAAAACAABIAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAAA
"text/html": [
"\n",
"        <iframe\n",
"            width=\"800\"\n",
"            height=\"600\"\n",
"            src=\"https://www.youtube.com/embed/FmpDIaiMIeA\"\n",
"            frameborder=\"0\"\n",
"            allowfullscreen\n",
"        ></iframe>\n",
"        "
],
"text/plain": [
"<IPython.lib.display.YouTubeVideo at 0x7f70ba22e910>"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import IPython\n",
"IPython.display.YouTubeVideo('FmpDIaiMIeA', width=800, height=600)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Capabilities of convolutional neural networks"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<img style=\"margin: auto\" height=\"80%\" src=\"http://colah.github.io/posts/2014-07-Conv-Nets-Modular/img/KSH-results.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Example: MNIST"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"\n",
"import math\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import random\n",
"\n",
"from IPython.display import YouTubeVideo"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"# source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py\n",
"\n",
"import keras\n",
"from keras.datasets import mnist\n",
"\n",
"from keras.models import Sequential\n",
"from keras.layers import Dense, Dropout, Flatten\n",
"from keras.layers import Conv2D, MaxPooling2D\n",
"\n",
"# load the data and split it into training and test sets\n",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"def draw_examples(examples, captions=None):\n",
"    plt.figure(figsize=(16, 4))\n",
"    m = len(examples)\n",
"    for i, example in enumerate(examples):\n",
"        plt.subplot(100 + m * 10 + i + 1)\n",
"        plt.imshow(example, cmap=plt.get_cmap('gray'))\n",
"    plt.show()\n",
"    if captions is not None:\n",
"        print(6 * ' ' + (10 * ' ').join(str(captions[i]) for i in range(m)))"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA6IAAACPCAYAAADgImbyAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAHEtJREFUeJzt3XmQVNXZx/HniIAYREQIISKCggiR\nTUDB1wITwBVZJKKEPUYoUYSUUKASgzEIolLFIlEkMIKUaIVVI0EElKiEAgnmZXXAyJYRUEE2Iy96\n3z/o5T5Hpqd7uvvc2z3fT9UU9ze3u+/pnme659D93GM8zxMAAAAAAFw5J+gBAAAAAADKFiaiAAAA\nAACnmIgCAAAAAJxiIgoAAAAAcIqJKAAAAADAKSaiAAAAAACnmIgCAAAAAJxiIgoAAAAAcCqtiagx\n5hZjzA5jzE5jzOhMDQq5h1pAFLUAEeoAcdQCRKgDxFELiPE8r1RfIlJORHaJyOUiUkFEPhaRxiVc\nx+Mr574OZboWQnCf+MpCHVALZeYr488J1ELOfvH6wFdW6oBayNkvaoGvpGvB87y03hG9VkR2ep73\nqed5p0Rkvoh0TeP2EE67k7gMtZD/kqkDEWqhLOA5AVHUAkSoA8RRC4hK6u/GdCail4jIXl/eF/me\nYowZZIzZYIzZkMaxEG4l1gJ1UGZQCxDh9QFxPCdAhOcExFELiDk32wfwPG+GiMwQETHGeNk+HsKJ\nOkAUtYAoagEi1AHiqAVEUQtlQzrviO4XkUt9uXbkeyh7qAVEUQsQoQ4QRy1AhDpAHLWAmHQmoutF\npIExpp4xpoKI3CMiSzMzLOQYagFR1AJEqAPEUQsQoQ4QRy0gptQfzfU877Qx5kERWS5nzoA1y/O8\nLRkbGXIGtYAoagEi1AHiqAWIUAeIoxbgZyKnRXZzMD7jnYs+8jyvVSZvkDrISRmvAxFqIUdRC4ji\n9QEiPCcgjlpAVFK1kM5HcwEAAAAASBkTUQAAAACAU0xEAQAAAABOMREFAAAAADjFRBQAAAAA4BQT\nUQAAAACAU0xEAQAAAABOMREFAAAAADh1btADAPJVy5YtVX7wwQdV7tevn8pz5sxReerUqSpv3Lgx\ng6MDAABANk2ePFnlhx56KLa9efNmta9z584q7969O3sDCwneEQUAAAAAOMVEFAAAAADgFB/NTVK5\ncuVUvvDCC5O+rv2RzPPPP1/lhg0bqvzAAw+o/Oyzz6rcq1cvlf/73/+qPGHChNj2E088kfQ4kZ7m\nzZurvGLFCpWrVKmisud5Kvft21flLl26qHzxxRenO0TkiQ4dOqg8b948ldu3b6/yjh07sj4mZMeY\nMWNUtp/TzzlH/3/yjTfeqPJ7772XlXEByIwLLrhA5cqVK6t8++23q1yjRg2VJ02apPK3336bwdEh\nVXXr1lW5T58+Kn///fex7UaNGql9V111lcp8NBcAAAAAgAxjIgoAAAAAcIqJKAAAAADAqTLTI1qn\nTh2VK1SooPL111+v8g033KBy1apVVe7Ro0fGxrZv3z6Vp0yZonL37t1VPnbsmMoff/yxyvQEuXPt\ntdfGthcsWKD22X3Edk+o/XM8deqUynZPaJs2bVS2l3Oxr18WtGvXLrZtP16LFi1yPRxnWrdurfL6\n9esDGgkybcCAASqPGjVKZX9/0dnYzzMAgufvG7R/p9u2bavy1VdfndJt16pVS2X/8iBw79ChQyqv\nWbNGZfv8H2Ud74gCAAAAAJxiIgoAAAAAcIqJKAAAAADAqbztEbXXdFy1apXKqawDmml2j4+9Ttzx\n48dVttcILCoqUvnw4cMqs2Zg5thrvl5zzTUqv/LKK7Ftu0+jJIWFhSpPnDhR5fnz56v8wQcfqGzX\nzfjx41M6fj7wr5nYoEEDtS+fekTttSLr1aun
8mWXXaayMSbrY0J22D/L8847L6CRIFXXXXedyv71\nA+21fX/2s58lvK0RI0ao/J///Edl+zwW/tciEZF169YlHiwyyl7/cfjw4Sr37t07tl2pUiW1z36+\n3rt3r8r2+STstSd79uyp8vTp01Xevn17ccNGFpw4cULlsrAWaDp4RxQAAAAA4BQTUQAAAACAU0xE\nAQAAAABO5W2P6J49e1T+8ssvVc5kj6jdi3HkyBGVf/7zn6tsr/c4d+7cjI0FmfXiiy+q3KtXr4zd\ntt1vWrlyZZXt9WD9/ZAiIk2bNs3YWHJVv379Yttr164NcCTZZfcf33fffSrb/WH0BOWOjh07qjx0\n6NCEl7d/tp07d1b5wIEDmRkYSnT33XerPHnyZJWrV68e27b7AN99912Va9SoofIzzzyT8Nj27dnX\nv+eeexJeH6mx/2Z8+umnVbZr4YILLkj6tu3zRdx8880qly9fXmX7OcBfZ2fLcKtq1aoqN2vWLKCR\n5AbeEQUAAAAAOMVEFAAAAADgFBNRAAAAAIBTedsj+tVXX6k8cuRIle2+mn/+858qT5kyJeHtb9q0\nKbbdqVMntc9eQ8heL2zYsGEJbxvBadmypcq33367yonWZ7R7Ot944w2Vn332WZXtdeHsGrTXh/3F\nL36R9FjKCnt9zXw1c+bMhPvtHiOEl73+4+zZs1Uu6fwFdu8ga9Rlz7nn6j+RWrVqpfJLL72ksr3u\n9Jo1a2LbTz75pNr3/vvvq1yxYkWVX3/9dZVvuummhGPdsGFDwv1IT/fu3VX+zW9+U+rb2rVrl8r2\n35D2OqL169cv9bHgnv08UKdOnaSv27p1a5XtfuB8fL4vG3/FAQAAAABCg4koAAAAAMCpEieixphZ\nxpiDxpjNvu9VM8asMMYURv69KLvDRBhQC4iiFiBCHSCOWkAUtQAR6gDJSaZHtEBEponIHN/3RovI\nSs/zJhhjRkfyqMwPL3MWL16s8qpVq1Q+duyYyva6P/fee6/K/n4/uyfUtmXLFpUHDRqUeLDhVSB5\nUAt+zZs3V3nFihUqV6lSRWXP81RetmxZbNteY7R9+/YqjxkzRmW77+/QoUMqf/zxxyp///33Ktv9\nq/a6pBs3bpQsKpAAasFeO7VmzZqZvPnQKqlv0K5bhwokz54Tsq1///4q//SnP014eXu9yTlz5pz9\ngsErkDyrhT59+qhcUq+2/XvoX1vy6NGjCa9rr0NZUk/ovn37VH755ZcTXt6xAsmzWrjrrrtSuvxn\nn32m8vr162Pbo0bpu233hNoaNWqU0rFDpEDyrA6SYZ//o6CgQOWxY8cWe11735EjR1SeNm1aOkML\npRLfEfU8b42IfGV9u6uIRJ/1XhaRbhkeF0KIWkAUtQAR6gBx1AKiqAWIUAdITmnPmlvT87yiyPbn\nIlLs2xLGmEEikrNvAaJESdUCdVAmUAsQ4fUBcTwnIIpagAivD7CkvXyL53meMcZLsH+GiMwQEUl0\nOeS+RLVAHZQt1AJEeH1AHM8JiKIWIMLrA84o7UT0gDGmlud5RcaYWiJyMJODcqGkfo2vv/464f77\n7rsvtv3aa6+pfXYvX57LqVq48sorVbbXl7V78b744guVi4qKVPb35Rw/flzt++tf/5owp6tSpUoq\nP/zwwyr37t07o8dLQtZr4bbbblPZfgzyhd37Wq9evYSX379/fzaHk6qcek7IturVq6v861//WmX7\n9cLuCfrjH/+YnYG5kVO1YK/1+eijj6psnyNg+vTpKtvnASjp7wy/xx57LOnLiog89NBDKtvnGAih\nnKoFm/9vPpEfnuvj7bffVnnnzp0qHzxY+rubZ+dCyOk6KA37eSVRj2hZVNrlW5aKSPSMC/1FZElm\nhoMcRC0gilqACHWAOGoBUdQCRKgDWJJZvuVVEVkrIg2NMfuMMfeKyAQR6WSMKRSRjpGMPEctIIpa\ngAh1gDhq
AVHUAkSoAySnxI/mep7Xq5hdHTI8FoQctYAoagEi1AHiqAVEUQsQoQ6QnLRPVpSv7M9w\nt2zZUmX/GpEdO3ZU++x
"text/plain": [
"<matplotlib.figure.Figure at 0x7f70ba2e9090>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"      5          0          4          1          9          2          1\n"
]
}
],
"source": [
"draw_examples(x_train[:7], captions=y_train)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"batch_size = 128\n",
"num_classes = 10\n",
"epochs = 12\n",
"\n",
"# input image dimensions\n",
"img_rows, img_cols = 28, 28"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [],
"source": [
"if keras.backend.image_data_format() == 'channels_first':\n",
"    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)\n",
"    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)\n",
"    input_shape = (1, img_rows, img_cols)\n",
"else:\n",
"    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\n",
"    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)\n",
"    input_shape = (img_rows, img_cols, 1)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"x_train shape: (60000, 28, 28, 1)\n",
"60000 train samples\n",
"10000 test samples\n"
]
}
],
"source": [
"x_train = x_train.astype('float32')\n",
"x_test = x_test.astype('float32')\n",
"x_train /= 255\n",
"x_test /= 255\n",
"print('x_train shape: {}'.format(x_train.shape))\n",
"print('{} train samples'.format(x_train.shape[0]))\n",
"print('{} test samples'.format(x_test.shape[0]))\n",
"\n",
"# convert class vectors to binary class matrices\n",
"y_train = keras.utils.to_categorical(y_train, num_classes)\n",
"y_test = keras.utils.to_categorical(y_test, num_classes)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"model = Sequential()\n",
"model.add(Conv2D(32, kernel_size=(3, 3),\n",
"                 activation='relu',\n",
"                 input_shape=input_shape))\n",
"model.add(Conv2D(64, (3, 3), activation='relu'))\n",
"model.add(MaxPooling2D(pool_size=(2, 2)))\n",
"model.add(Dropout(0.25))\n",
"model.add(Flatten())\n",
"model.add(Dense(128, activation='relu'))\n",
"model.add(Dropout(0.5))\n",
"model.add(Dense(num_classes, activation='softmax'))"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"model.compile(loss=keras.losses.categorical_crossentropy,\n",
"              optimizer=keras.optimizers.Adadelta(),\n",
"              metrics=['accuracy'])"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train on 60000 samples, validate on 10000 samples\n",
"Epoch 1/12\n",
"60000/60000 [==============================] - 333s - loss: 0.3256 - acc: 0.9037 - val_loss: 0.0721 - val_acc: 0.9780\n",
"Epoch 2/12\n",
"60000/60000 [==============================] - 342s - loss: 0.1088 - acc: 0.9683 - val_loss: 0.0501 - val_acc: 0.9835\n",
"Epoch 3/12\n",
"60000/60000 [==============================] - 366s - loss: 0.0837 - acc: 0.9748 - val_loss: 0.0429 - val_acc: 0.9860\n",
"Epoch 4/12\n",
"60000/60000 [==============================] - 311s - loss: 0.0694 - acc: 0.9788 - val_loss: 0.0380 - val_acc: 0.9878\n",
"Epoch 5/12\n",
"60000/60000 [==============================] - 325s - loss: 0.0626 - acc: 0.9815 - val_loss: 0.0334 - val_acc: 0.9886\n",
"Epoch 6/12\n",
"60000/60000 [==============================] - 262s - loss: 0.0552 - acc: 0.9835 - val_loss: 0.0331 - val_acc: 0.9890\n",
"Epoch 7/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0494 - acc: 0.9852 - val_loss: 0.0291 - val_acc: 0.9903\n",
"Epoch 8/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0461 - acc: 0.9859 - val_loss: 0.0294 - val_acc: 0.9902\n",
"Epoch 9/12\n",
"60000/60000 [==============================] - 219s - loss: 0.0423 - acc: 0.9869 - val_loss: 0.0287 - val_acc: 0.9907\n",
"Epoch 10/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0418 - acc: 0.9875 - val_loss: 0.0299 - val_acc: 0.9906\n",
"Epoch 11/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0388 - acc: 0.9879 - val_loss: 0.0304 - val_acc: 0.9905\n",
"Epoch 12/12\n",
"60000/60000 [==============================] - 218s - loss: 0.0366 - acc: 0.9889 - val_loss: 0.0275 - val_acc: 0.9910\n"
]
},
{
"data": {
"text/plain": [
"<keras.callbacks.History at 0x7f70b80b1a10>"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.fit(x_train, y_train,\n",
"          batch_size=batch_size,\n",
"          epochs=epochs,\n",
"          verbose=1,\n",
"          validation_data=(x_test, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"('Test loss:', 0.027530849870144449)\n",
"('Test accuracy:', 0.99099999999999999)\n"
]
}
],
"source": [
"score = model.evaluate(x_test, y_test, verbose=0)\n",
"print('Test loss:', score[0])\n",
"print('Test accuracy:', score[1])"
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
},
"livereveal": {
"start_slideshow_at": "selected",
"theme": "amu"
}
},
"nbformat": 4,
"nbformat_minor": 4
}