Compare commits

...

2 Commits

Author            SHA1        Message                                                                              Date
Paweł Skórzewski  6bbf32ab04  Drobne poprawki do przykładu CNN w Kerasie (minor fixes to the Keras CNN example)   2023-06-01 10:31:27 +02:00
Paweł Skórzewski  ce3b8d0e9d  Aktualizacja wykładów 12 i 13 (update of lectures 12 and 13)                        2023-06-01 10:31:04 +02:00
3 changed files with 79 additions and 56 deletions

View File

@@ -1,7 +1,6 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "slideshow": {
@@ -14,7 +13,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -26,7 +24,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -35,9 +32,23 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 1,
 "metadata": {},
-"outputs": [],
+"outputs": [
+{
+"name": "stderr",
+"output_type": "stream",
+"text": [
+"2023-06-01 10:29:41.492705: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n",
+"To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
+"2023-06-01 10:29:42.477407: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n",
+"2023-06-01 10:29:42.477524: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n",
+"2023-06-01 10:29:45.603958: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\n",
+"2023-06-01 10:29:45.604816: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\n",
+"2023-06-01 10:29:45.604834: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n"
+]
+}
+],
 "source": [
 "# Konieczne importy\n",
 "\n",
@@ -48,15 +59,13 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 2,
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n",
-"11493376/11490434 [==============================] - 1s 0us/step\n",
 "x_train shape: (60000, 28, 28, 1)\n",
 "60000 train samples\n",
 "10000 test samples\n"
@@ -89,7 +98,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 3,
 "metadata": {},
 "outputs": [
 {
@@ -101,18 +110,38 @@
 " Layer (type) Output Shape Param # \n",
 "=================================================================\n",
 " conv2d (Conv2D) (None, 26, 26, 32) 320 \n",
-"_________________________________________________________________\n",
-"max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 \n",
-"_________________________________________________________________\n",
-"conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 \n",
-"_________________________________________________________________\n",
-"max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 \n",
-"_________________________________________________________________\n",
+" \n",
+" max_pooling2d (MaxPooling2D (None, 13, 13, 32) 0 \n",
+" ) \n",
+" \n",
+" conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 \n"
+]
+},
+{
+"name": "stderr",
+"output_type": "stream",
+"text": [
+"2023-06-01 10:29:49.494604: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\n",
+"2023-06-01 10:29:49.495467: W tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:265] failed call to cuInit: UNKNOWN ERROR (303)\n",
+"2023-06-01 10:29:49.496113: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ELLIOT): /proc/driver/nvidia/version does not exist\n",
+"2023-06-01 10:29:49.497742: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n",
+"To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
+]
+},
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+" \n",
+" max_pooling2d_1 (MaxPooling (None, 5, 5, 64) 0 \n",
+" 2D) \n",
+" \n",
 " flatten (Flatten) (None, 1600) 0 \n",
-"_________________________________________________________________\n",
+" \n",
 " dropout (Dropout) (None, 1600) 0 \n",
-"_________________________________________________________________\n",
+" \n",
 " dense (Dense) (None, 10) 16010 \n",
+" \n",
 "=================================================================\n",
 "Total params: 34,826\n",
 "Trainable params: 34,826\n",
@@ -142,52 +171,36 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": null,
 "metadata": {},
 "outputs": [
+{
+"name": "stderr",
+"output_type": "stream",
+"text": [
+"2023-06-01 10:30:24.247916: W tensorflow/tsl/framework/cpu_allocator_impl.cc:82] Allocation of 169344000 exceeds 10% of free system memory.\n"
+]
+},
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"422/422 [==============================] - 38s 91ms/step - loss: 0.0556 - accuracy: 0.9826 - val_loss: 0.0412 - val_accuracy: 0.9893\n"
+"Epoch 1/15\n",
+" 99/422 [======>.......................] - ETA: 24s - loss: 0.9593 - accuracy: 0.7040"
 ]
-},
-{
-"data": {
-"text/plain": [
-"<tensorflow.python.keras.callbacks.History at 0x1a50b35a070>"
-]
-},
-"execution_count": 9,
-"metadata": {},
-"output_type": "execute_result"
 }
 ],
 "source": [
 "# Uczenie modelu\n",
-"\n",
-"batch_size = 128\n",
-"epochs = 15\n",
-"\n",
 "model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
-"\n",
-"model.fit(x_train, y_train, epochs=1, batch_size=batch_size, epochs=epochs, validation_split=0.1)"
+"model.fit(x_train, y_train, batch_size=128, epochs=15, validation_split=0.1)"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": null,
 "metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"Test loss: 0.03675819933414459\n",
-"Test accuracy: 0.988099992275238\n"
-]
-}
-],
+"outputs": [],
 "source": [
 "# Ewaluacja modelu\n",
 "\n",

View File

@@ -38,7 +38,7 @@
 "### _Batch gradient descent_\n",
 "\n",
 "* Klasyczna wersja metody gradientu prostego\n",
-"* Obliczamy gradient funkcji kosztu względem całego zbioru treningowego:\n",
+"* Obliczamy gradient funkcji kosztu względem całego zbioru uczącego:\n",
 " $$ \\theta := \\theta - \\alpha \\cdot \\nabla_\\theta J(\\theta) $$\n",
 "* Dlatego może działać bardzo powoli\n",
 "* Nie można dodawać nowych przykładów na bieżąco w trakcie trenowania modelu (*online learning*)"
@@ -288,8 +288,7 @@
 "### Adagrad\n",
 "\n",
 "* “<b>Ada</b>ptive <b>grad</b>ient”\n",
-"* Adagrad dostosowuje współczynnik uczenia (*learning rate*) do parametrów: zmniejsza go dla cech występujących częściej, a zwiększa dla występujących rzadziej:\n",
-"* Świetny do trenowania na rzadkich (*sparse*) zbiorach danych\n",
+"* Adagrad dostosowuje współczynnik uczenia (*learning rate*) do parametrów: zmniejsza go dla cech występujących częściej, a zwiększa dla występujących rzadziej\n",
 "* Wada: współczynnik uczenia może czasami gwałtownie maleć\n",
 "* Wyniki badań pokazują, że często **starannie** dobrane $\\alpha$ daje lepsze wyniki na zbiorze testowym"
 ]
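Editorial note: this hunk drops the bullet recommending Adagrad for sparse data and removes a stray colon. For reference, the per-parameter scaling the remaining bullets describe looks like this in NumPy (an editorial sketch, not lecture material):

    import numpy as np

    def adagrad_step(theta, grad, accum, alpha=0.01, eps=1e-8):
        """One Adagrad update: each parameter's learning rate is divided by the root of its
        accumulated squared gradients, so frequently updated parameters take smaller steps;
        this is also why the effective rate can shrink sharply over time (the noted drawback)."""
        accum = accum + grad ** 2
        theta = theta - alpha * grad / (np.sqrt(accum) + eps)
        return theta, accum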

View File

@@ -15,7 +15,18 @@
 "cell_type": "markdown",
 "metadata": {
 "slideshow": {
-"slide_type": "slide"
+"slide_type": "subslide"
+}
+},
+"source": [
+"Splotowe sieci neuronowe, inaczej konwolucyjne sieci neuronowe (*convolutional neural networks*, CNN, ConvNet)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {
+"slideshow": {
+"slide_type": "subslide"
 }
 },
 "source": [