Neural network examples
This commit is contained in:
parent
558a89ebde
commit
e2a0a45458
260
lab/Sieci_neuronowe_Keras.ipynb
Normal file
@ -0,0 +1,260 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "-"
}
},
"source": [
"### Machine learning: applications\n",
"# Neural networks (Keras)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Keras](https://keras.io) is a Python interface to the [TensorFlow](https://www.tensorflow.org) machine learning platform.\n",
"\n",
"To use it, you need to install the TensorFlow library.\n",
"\n",
"Installation instructions for Python scripts (*.py* files):\n",
" * `pip`: https://www.tensorflow.org/install\n",
" * `conda`: https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run TensorFlow inside Jupyter, do the following:\n",
"\n",
"#### Before the first run (you only need to do this once)\n",
"\n",
"Installing the TensorFlow library in an Anaconda environment:\n",
"\n",
"1. Start *Anaconda Navigator*\n",
"1. Select the *CMD.exe Prompt* tile\n",
"1. Click the *Launch* button\n",
"1. A console will appear. Type the following commands, confirming each one by pressing Enter:\n",
"```\n",
"conda create -n tf tensorflow\n",
"conda activate tf\n",
"conda install pandas matplotlib\n",
"jupyter notebook\n",
"```\n",
"\n",
"#### Before every run\n",
"\n",
"If you want to use the TensorFlow library, start Jupyter Notebook as follows:\n",
"\n",
"1. Start *Anaconda Navigator*\n",
"1. Select the *CMD.exe Prompt* tile\n",
"1. Click the *Launch* button\n",
"1. A console will appear. Type the following commands, confirming each one by pressing Enter:\n",
"```\n",
"conda activate tf\n",
"jupyter notebook\n",
"```"
]
},
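{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check after installation (a minimal sketch): import TensorFlow and print its version. If the import succeeds, the environment is set up correctly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: confirm that TensorFlow can be imported and show its version\n",
"import tensorflow as tf\n",
"\n",
"print(tf.__version__)"
]
},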
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An example implementation of a neural network that recognizes digits from the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset, based on https://keras.io/examples/vision/mnist_convnet"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Required imports\n",
"\n",
"import numpy as np\n",
"from tensorflow import keras\n",
"from tensorflow.keras import layers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Data preparation\n",
"\n",
"num_classes = 10\n",
"input_shape = (28, 28, 1)\n",
"\n",
"# split the data into training and test sets\n",
"(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n",
"\n",
"# scale pixel values to the interval [0, 1]\n",
"x_train = x_train.astype(\"float32\") / 255\n",
"x_test = x_test.astype(\"float32\") / 255\n",
"# make sure the images have shape (28, 28, 1)\n",
"x_train = np.expand_dims(x_train, -1)\n",
"x_test = np.expand_dims(x_test, -1)\n",
"print(\"x_train shape:\", x_train.shape)\n",
"print(x_train.shape[0], \"train samples\")\n",
"print(x_test.shape[0], \"test samples\")\n",
"\n",
"# convert categorical labels to binary (one-hot) vectors\n",
"y_train = keras.utils.to_categorical(y_train, num_classes)\n",
"y_test = keras.utils.to_categorical(y_test, num_classes)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model: \"sequential\"\n",
"_________________________________________________________________\n",
"Layer (type)                 Output Shape              Param #   \n",
"=================================================================\n",
"conv2d (Conv2D)              (None, 26, 26, 32)        320       \n",
"_________________________________________________________________\n",
"max_pooling2d (MaxPooling2D) (None, 13, 13, 32)        0         \n",
"_________________________________________________________________\n",
"conv2d_1 (Conv2D)            (None, 11, 11, 64)        18496     \n",
"_________________________________________________________________\n",
"max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64)          0         \n",
"_________________________________________________________________\n",
"flatten (Flatten)            (None, 1600)              0         \n",
"_________________________________________________________________\n",
"dropout (Dropout)            (None, 1600)              0         \n",
"_________________________________________________________________\n",
"dense (Dense)                (None, 10)                16010     \n",
"=================================================================\n",
"Total params: 34,826\n",
"Trainable params: 34,826\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"source": [
"# Create the model\n",
"\n",
"model = keras.Sequential(\n",
"    [\n",
"        keras.Input(shape=input_shape),\n",
"        layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\"),\n",
"        layers.MaxPooling2D(pool_size=(2, 2)),\n",
"        layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"),\n",
"        layers.MaxPooling2D(pool_size=(2, 2)),\n",
"        layers.Flatten(),\n",
"        layers.Dropout(0.5),\n",
"        layers.Dense(num_classes, activation=\"softmax\"),\n",
"    ]\n",
")\n",
"\n",
"model.summary()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"422/422 [==============================] - 38s 91ms/step - loss: 0.0556 - accuracy: 0.9826 - val_loss: 0.0412 - val_accuracy: 0.9893\n"
]
},
{
"data": {
"text/plain": [
"<tensorflow.python.keras.callbacks.History at 0x1a50b35a070>"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Train the model\n",
"\n",
"batch_size = 128\n",
"epochs = 15\n",
"\n",
"model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
"\n",
"model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test loss: 0.03675819933414459\n",
"Test accuracy: 0.988099992275238\n"
]
}
],
"source": [
"# Evaluate the model\n",
"\n",
"score = model.evaluate(x_test, y_test, verbose=0)\n",
"print(\"Test loss:\", score[0])\n",
"print(\"Test accuracy:\", score[1])"
]
},
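{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of using the trained model for inference: `model.predict` returns one row of 10 class probabilities per sample (because of the softmax output layer), so `np.argmax` picks the most likely digit."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Predict classes for the first few test images\n",
"probs = model.predict(x_test[:5])\n",
"\n",
"# Each row of probs holds 10 class probabilities; take the most likely class\n",
"print(\"Predicted digits:\", np.argmax(probs, axis=1))\n",
"print(\"Actual digits:   \", np.argmax(y_test[:5], axis=1))"
]
}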
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
},
"livereveal": {
"start_slideshow_at": "selected",
"theme": "amu"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
285
lab/Sieci_neuronowe_PyTorch.ipynb
Normal file
@ -0,0 +1,285 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "-"
}
},
"source": [
"### Machine learning: applications\n",
"# Neural networks (PyTorch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An example implementation of a neural network that recognizes digits from the [MNIST dataset](https://en.wikipedia.org/wiki/MNIST_database), based on https://github.com/pytorch/examples/tree/master/mnist"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"import torch.optim as optim\n",
"from torchvision import datasets, transforms\n",
"from torch.optim.lr_scheduler import StepLR\n",
"\n",
"\n",
"# This network has the same architecture as the one in Sieci_neuronowe_Keras.ipynb\n",
"\n",
"class Net(nn.Module):\n",
"    \"\"\"In PyTorch, you create a neural network\n",
"    by defining a class that inherits from nn.Module.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self):\n",
"        super().__init__()\n",
"\n",
"        # Convolutional layers\n",
"        self.conv1 = nn.Conv2d(1, 32, 3, 1)\n",
"        self.conv2 = nn.Conv2d(32, 64, 3, 1)\n",
"\n",
"        # Dropout layer\n",
"        self.dropout = nn.Dropout(0.5)\n",
"\n",
"        # Linear (dense) layer; after two conv + pooling steps\n",
"        # the feature maps flatten to 64 * 5 * 5 = 1600 values\n",
"        self.dense = nn.Linear(1600, 10)\n",
"\n",
"    def forward(self, x):\n",
"        \"\"\"Define the forward pass as successive transformations of the input x\"\"\"\n",
"        x = self.conv1(x)\n",
"        x = F.relu(x)\n",
"        x = F.max_pool2d(x, 2)\n",
"        x = self.conv2(x)\n",
"        x = F.relu(x)\n",
"        x = F.max_pool2d(x, 2)\n",
"        x = torch.flatten(x, 1)\n",
"        x = self.dropout(x)\n",
"        x = self.dense(x)\n",
"        output = F.log_softmax(x, dim=1)\n",
"        return output\n",
"\n",
"\n",
"def train(model, device, train_loader, optimizer, epoch, log_interval, dry_run):\n",
"    \"\"\"Train the model\"\"\"\n",
"    model.train()\n",
"    for batch_idx, (data, target) in enumerate(train_loader):\n",
"        data, target = data.to(device), target.to(device)  # move the data to the GPU (if applicable)\n",
"        optimizer.zero_grad()  # zero out the gradients\n",
"        output = model(data)  # forward pass\n",
"        loss = F.nll_loss(output, target)  # compute the loss\n",
"        loss.backward()  # backpropagation\n",
"        optimizer.step()  # optimizer step\n",
"\n",
"        # Print the current value of the loss function\n",
"        if batch_idx % log_interval == 0:\n",
"            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n",
"                epoch, batch_idx * len(data), len(train_loader.dataset),\n",
"                100. * batch_idx / len(train_loader), loss.item()))\n",
"            if dry_run:\n",
"                break\n",
"\n",
"\n",
"def test(model, device, test_loader):\n",
"    \"\"\"Test the model\"\"\"\n",
"    model.eval()\n",
"    test_loss = 0\n",
"    correct = 0\n",
"    with torch.no_grad():  # no need to keep gradients, since we do not backpropagate here\n",
"        for data, target in test_loader:\n",
"            data, target = data.to(device), target.to(device)  # move the data to the GPU (if applicable)\n",
"            output = model(data)  # forward pass\n",
"            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up the loss over each batch\n",
"            pred = output.argmax(dim=1, keepdim=True)  # prediction: the class with the highest log-probability\n",
"            correct += pred.eq(target.view_as(pred)).sum().item()\n",
"\n",
"    test_loss /= len(test_loader.dataset)  # average loss over the test set\n",
"\n",
"    print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n",
"        test_loss, correct, len(test_loader.dataset),\n",
"        100. * correct / len(test_loader.dataset)))\n",
"\n",
"\n",
"def run(\n",
"    batch_size=64,\n",
"    test_batch_size=1000,\n",
"    epochs=14,\n",
"    lr=1.0,\n",
"    gamma=0.7,\n",
"    no_cuda=False,\n",
"    dry_run=False,\n",
"    seed=1,\n",
"    log_interval=10,\n",
"):\n",
"    \"\"\"Main function for running the example.\n",
"\n",
"    Arguments:\n",
"    batch_size - batch size used during training (default: 64)\n",
"    test_batch_size - batch size used during testing (default: 1000)\n",
"    epochs - number of training epochs (default: 14)\n",
"    lr - learning rate (default: 1.0)\n",
"    gamma - gamma coefficient (for the learning-rate scheduler) (default: 0.7)\n",
"    no_cuda - disables training on the GPU (default: False)\n",
"    dry_run - quickly checks a single pass (default: False)\n",
"    seed - seed of the pseudorandom number generator (default: 1)\n",
"    log_interval - how often to log the training status (default: 10)\n",
"    save_model - saves the trained model (default: False)\n",
"    \"\"\"\n",
"    # Seed the pseudorandom number generator\n",
"    torch.manual_seed(seed)\n",
"\n",
"    # Decide whether training should run on the GPU\n",
"    use_cuda = not no_cuda and torch.cuda.is_available()\n",
"    device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n",
"\n",
"    # Batch settings\n",
"    train_kwargs = {'batch_size': batch_size}\n",
"    test_kwargs = {'batch_size': test_batch_size}\n",
"    if use_cuda:\n",
"        cuda_kwargs = {'num_workers': 1,\n",
"                       'pin_memory': True,\n",
"                       'shuffle': True}\n",
"        train_kwargs.update(cuda_kwargs)\n",
"        test_kwargs.update(cuda_kwargs)\n",
"\n",
"    transform = transforms.Compose([\n",
"        transforms.ToTensor(),\n",
"        transforms.Normalize((0.1307,), (0.3081,))\n",
"    ])\n",
"\n",
"    # Load the data\n",
"    dataset1 = datasets.MNIST('../data', train=True, download=True,\n",
"                              transform=transform)\n",
"    dataset2 = datasets.MNIST('../data', train=False,\n",
"                              transform=transform)\n",
"\n",
"    # The DataLoader class makes it easier to manage the training data\n",
"    train_loader = torch.utils.data.DataLoader(dataset1, **train_kwargs)\n",
"    test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)\n",
"\n",
"    # Create the model - the Net class is the neural network we defined above\n",
"    model = Net().to(device)\n",
"\n",
"    # Choose the optimization method (here: Adadelta)\n",
"    optimizer = optim.Adadelta(model.parameters(), lr=lr)\n",
"\n",
"    # Learning-rate scheduler\n",
"    scheduler = StepLR(optimizer, step_size=1, gamma=gamma)\n",
"\n",
"    # Main training and testing loop - successive epochs\n",
"    for epoch in range(1, epochs + 1):\n",
"        train(model, device, train_loader, optimizer, epoch, log_interval, dry_run)\n",
"        test(model, device, test_loader)\n",
"        scheduler.step()\n",
"\n",
"    # Optionally save the trained model to a file\n",
"    if save_model:\n",
"        torch.save(model.state_dict(), \"mnist_cnn.pt\")"
]
},
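{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of a quick smoke test: with `dry_run=True`, the `train` function above breaks out of each epoch after the first logged batch, so training finishes almost immediately (evaluation still runs over the full test set)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Smoke test: one quick pass per epoch, no full training\n",
"run(epochs=1, dry_run=True)"
]
},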
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: running this example takes a long time. To make it shorter, you can reduce the number of epochs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run(epochs=5)"
]
},
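{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the model was trained with `save_model=True`, it can be restored later without retraining. A minimal sketch (it assumes the file `mnist_cnn.pt` produced by `run` exists): recreate the `Net` architecture, load the saved weights, and switch to evaluation mode."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Restore the trained weights into a fresh instance of the network\n",
"model = Net()\n",
"model.load_state_dict(torch.load(\"mnist_cnn.pt\", map_location=\"cpu\"))\n",
"model.eval()  # disable dropout for inference\n",
"\n",
"# Predict the class of a single test image (a batch dimension is required)\n",
"test_set = datasets.MNIST('../data', train=False, transform=transforms.Compose([\n",
"    transforms.ToTensor(),\n",
"    transforms.Normalize((0.1307,), (0.3081,))\n",
"]))\n",
"image, label = test_set[0]\n",
"with torch.no_grad():\n",
"    pred = model(image.unsqueeze(0)).argmax(dim=1).item()\n",
"print(f\"Predicted: {pred}, actual: {label}\")"
]
}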
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3"
},
"livereveal": {
"start_slideshow_at": "selected",
"theme": "amu"
}
},
"nbformat": 4,
"nbformat_minor": 4
}