{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Training a Classifier\n",
    "=====================\n",
    "\n",
    "This is it. You have seen how to define neural networks, compute loss and make\n",
    "updates to the weights of the network.\n",
    "\n",
    "Now you might be thinking,\n",
    "\n",
    "What about data?\n",
    "----------------\n",
    "\n",
    "Generally, when you have to deal with image, text, audio or video data,\n",
    "you can use standard Python packages that load data into a numpy array.\n",
    "Then you can convert this array into a ``torch.*Tensor`` (a minimal\n",
    "sketch of this conversion follows after the imports below).\n",
    "\n",
    "- For images, packages such as Pillow and OpenCV are useful\n",
    "- For audio, packages such as scipy and librosa\n",
    "- For text, either raw Python or Cython based loading, or NLTK and\n",
    "  SpaCy are useful\n",
    "\n",
    "Specifically for vision, we have created a package called\n",
    "``torchvision``, which has data loaders for common datasets such as\n",
    "ImageNet, CIFAR10, MNIST, etc. and data transformers for images, viz.,\n",
    "``torchvision.datasets`` and ``torch.utils.data.DataLoader``.\n",
    "\n",
    "This provides a huge convenience and avoids writing boilerplate code.\n",
    "\n",
    "For this tutorial, we will use the CIFAR10 dataset.\n",
    "It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,\n",
    "‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of\n",
    "size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.\n",
    "\n",
    ".. figure:: /_static/img/cifar10.png\n",
    "   :alt: cifar10\n",
    "\n",
    "   cifar10\n",
    "\n",
    "\n",
    "Training an image classifier\n",
    "----------------------------\n",
    "\n",
    "We will do the following steps in order:\n",
    "\n",
    "1. Load and normalize the CIFAR10 training and test datasets using\n",
    "   ``torchvision``\n",
    "2. Define a Convolutional Neural Network\n",
    "3. Define a loss function\n",
    "4. Train the network on the training data\n",
    "5. Test the network on the test data\n",
    "\n",
    "1. Loading and normalizing CIFAR10\n",
    "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
    "\n",
    "Using ``torchvision``, it’s extremely easy to load CIFAR10.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms"
   ]
  },
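  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As mentioned above, data typically arrives as a numpy array and is then\n",
    "converted to a ``torch.*Tensor``. A minimal sketch of that round trip\n",
    "(the array values here are arbitrary, purely for illustration):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# A toy numpy array standing in for data loaded by Pillow/scipy/etc.\n",
    "arr = np.arange(6, dtype=np.float32).reshape(2, 3)\n",
    "\n",
    "# numpy -> tensor (shares memory with the array), and tensor -> numpy\n",
    "t = torch.from_numpy(arr)\n",
    "back = t.numpy()\n",
    "print(type(t), t.dtype, type(back))"
   ]
  },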
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output of torchvision datasets consists of PILImage images with values in the range [0, 1].\n",
    "We transform them to Tensors of normalized range [-1, 1].\n",
    "<div class=\"alert alert-info\"><h4>Note</h4><p>If running on Windows and you get a BrokenPipeError, try setting\n",
    "    the num_workers of torch.utils.data.DataLoader() to 0.</p></div>\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e66da546b761414ca2166df80ac0eebb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "HBox(children=(HTML(value=''), FloatProgress(value=1.0, bar_style='info', layout=Layout(width='20px'), max=1.0…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./data/cifar-10-python.tar.gz to ./data\n",
      "Files already downloaded and verified\n"
     ]
    }
   ],
   "source": [
    "transform = transforms.Compose(\n",
    "    [transforms.ToTensor(),\n",
    "     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n",
    "\n",
    "trainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n",
    "                                        download=True, transform=transform)\n",
    "trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n",
    "                                          shuffle=True, num_workers=2)\n",
    "\n",
    "testset = torchvision.datasets.CIFAR10(root='./data', train=False,\n",
    "                                       download=True, transform=transform)\n",
    "testloader = torch.utils.data.DataLoader(testset, batch_size=4,\n",
    "                                         shuffle=False, num_workers=2)\n",
    "\n",
    "classes = ('plane', 'car', 'bird', 'cat',\n",
    "           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')"
   ]
  },
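  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a sketch, not part of the original pipeline):\n",
    "``ToTensor`` maps pixel values to [0, 1], and ``Normalize`` with mean 0.5\n",
    "and std 0.5 then maps them to [-1, 1], since x' = (x - 0.5) / 0.5.\n",
    "We can verify the range on a single transformed image:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One transformed CIFAR10 image: shape (3, 32, 32), values in [-1, 1].\n",
    "img, label = trainset[0]\n",
    "print(img.shape, img.min().item(), img.max().item(), classes[label])"
   ]
  },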
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us show some of the training images, for fun.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\n"
     ]
    },
    {
     "data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAAB5CAYAAAAgYXpDAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAACsQklEQVR4nOz9S6gtW5fnh/3GfETEWmvvfR733u9VmZUpVFapLBssI6yGOwXC4IaheoVlMBIIsuWGwQ0V7pjqVctgcCvBwhIYWwIbpIbAGIEw7hjJwkaU65WVVZmV3+s+z36stSJiPoYbY85Ya+9z7pf3y8zyraTOvJy7916PWLFmzBhzjP/4j/8QVeXj+Dg+jo/j4/jzN9z3fQIfx8fxcXwcH8efbHw04B/Hx/FxfBx/TsdHA/5xfBwfx8fx53R8NOAfx8fxcXwcf07HRwP+cXwcH8fH8ed0fDTgH8fH8XF8HH9Ox5/KgIvI/1BE/p6I/J6I/I0/q5P6OD6Oj+Pj+Dj++CF/Uh64iHjg7wP/A+CPgP8C+DdV9f/7Z3d6H8fH8XF8HB/Ht43wp3jvfw/4PVX9fQAR+T8Bfw34VgO+3+/19evXf4qP/Dg+jo/j4/jnb/z85z//UlU/e/n4n8aA/wXgn1z9/UfAv/6r3vD69Wt+53d+50/xkR/Hx/FxfBz//I2/+Tf/5h986PF/6klMEfkdEfkvReS/PJ1O/7Q/7uP4OD6Oj+Ofm/GnMeA/BX7z6u/faI89G6r6u6r6r6nqv7bf7/8UH/dxfBwfx8fxcVyPPw2E8l8A/w0R+Rcww/0/Bv4nv84BUkqcns6UWlFABQRAFAEEQVRwAk5AVclaURRE7B+CiOBEGGMkeI8oQAWFqhVUEedxzlFqZV5XaqkUVYpePgvsc6Sdn6ii2GtqS/ZWUUQEEWcfb2cJAmIPUGulqiKqoOAFbgdPdA6H7ZpFlaVCVWXOhbXYHFQAEULwOGfHU23P1YqqcrPbc9gftnNWVR7u7zkej7DlpO2c7bSV61S19P8LoJdnnXPbd5DtMPastrmwo/XjbK9679jSDi9yeUNtx7l+tevzptBWwdV5/9mMfn3693v16hWffvopAOu6UkrZXjuMI6/evCGESJucqy8mz456PfT6tz/u3K+/nPR3vT+Xz47+oWN+21v0+onn1+xDb7frJM8evL62Iu+/5+WHPz09cP/u3bPr65zHOd8+v60e1T92euTZCy6v/+6EC3nvvF8+K87h+lqX7SIAtPVn9qN/pt3zL+6N9z/y+Vlv37XZljYfim73crcV7QtiK+G7L/4/sQFX1Swi/zPg/wp44N9T1b/96xzj+HjiH/3eHzIvC0mgAM4pzlUEiBpwOAanTA6yVh7LQtYKIYD3OBxBPNF7fvjmDXeHA1ILUjJaKzmv1FqJ40Qcd5yXhZ9//iXnZeGUM3MuOAQvdkEHB14E0YqrlVIrp7ySaqWIUlxFnCPGsS0Ch4jDOUcIHhTmdSaljFTFVWXvHX/5zY43U8ADowhzUR7PmbkoP3868+V5oQArgjjHzc2OaYpUrRTN1FqZlwWtlb/0m/8C/9Jv/6VtQZVS+P1/+A/5vX/wD2jWFhTWnCm1tI3haimJbJuWLRl7LoRA8H6bD1BqLbZx1kpuv/cjCeDa4qztmnoxI+kdRN82N/sQsiprM5bO2XPBOYJzKEquhapKKmXb1L/dSn230c4AcY5xGAgh8K/+d/9V/sq//FdA4PPPP39mwO9evea/86/969ze3V1tWrJt1oq7sgyXc+vGyX4oVL088979aI9d21n9NmvTjvee8fpVxgThYhP6L8+3caE5KwJOBO9cO6w9KH2NiG2ycOXc9Ndcnd/f+zt/m//Pf/VfknPePiEOI8OwQ1UpWlGFSn3/u+j2v8tcafMvmlFTVUrVzTD2LyHiXhzoMi/SvQguG1LfzIfm7Dlx23dvy5SSEillVCul2JoPIeC9b2u+HXPb3GX7LBWb5aoV1cJlKQjjtGccd+RaWdaVUivLuqDZPqvWiqB4Lch2R/3q8afxwFHV/xT4T/+k7y+1Ms8z57MZ8CzgXMW7iiBkAk4d6kC83eDnMpO0IiFCCHhxRBeoPlByRkuGUqAZcM0Jrc3g14rmwrLMzOeZ05o55YxDCM7jBIpzRMczA35OC6kUklOyNAM+ZJxzOPGbAY/RpvM8z6SckFJxpULwlD3gIk5sg3BZ0aVQcmU5nzmeZgqwKOAc4pWiA5VyMeDzTCmFJaXrtYqqcj6fuX93bwu/Gdk1JUqtVPRbDThcjHsM0Tz/awNeCqqVXCuplr43bMfqBrw/5ppn40UYQvsENYOSa2WtBfO87bnoPcF7VPvxlTVnUrks4Mt98gGP/+qxl3ZS2oPSzmscR2KMrOtKaNfKuecoog+ew+0Nt69ebYazG2+zGK79tL+fGeE+OapoM+DyAa+xGzC7r+1CfsjnuvY8/1QGvF+zF8fw3YA7aRsvF+MstvWJXAx3v2aXaLNfe2Wcdi/ORqlVyaU249vWoV55nNuXvBhvuXpM9PLdFTuGfZ3a1v9lo3l/CnoU+NxwX76TXUd/FVn1z0q5vGfAu3W3gPJqk+hHb7tcdS1S13JxniysJlSlqlCqkop9nzUVUjZbVWtBRJk87bz++PGnMuB/2uFEiNGTi93AoDgxj0zEEWXA4wmSCSSqA822DoP3xGEk+sAhTgTvGWPAoYgoQkUcjONoHuE44aeBUostVoRaCnld7WJ6JTghxsAuRrxWghbzHBwspUBNpKJQIa0FkYr32kIjO/9+Vb33tvumgta67dzBeQZvm9KhRlxRbiucvWcp1TYg7AbRquAEHwJSKzFGvPd4/37qIjfDLgK++QjSbkyuwrT3DMXVIq+1krMiCKXBGqot1LveMZ4dpUMsl7+0+Q+p2lLvN8EFshJocJIZc0dV0OaB1++AQrz3NbZN6WITLpCPbjdnVaWUSs7Z5viFUVO1TavkTPf8zAt1V/bpYswvN/HVL90Q9j+2j7iaK72et8s5PAMY+muq9i909fwV5PFiY1OVzcjo9jl6MUTtWEUuRujyHd6HBLdvshnFa+/W5nBZ8/PvocrPfvYLvvrmkQ5DKlfRlSq1GeJutB3g1Naja4+VWprRh7J54hcvXq/Ph8ux5MX3uKwP+30aR4Zg99MQo53bmii1sC4r8zIjeokUxxiJMeJEtijVtajd1le1S+3tC5VaWkQJpZrRn/a3jLsDKWWezmdyyZzPZ9a0Ukomp5XdOPDf/Eu/xSev7/gu43s14CLgvSN4ZzhzVbxz5qGKI7iIF09AcWRcgW4ivPfEEBjjwGEyAx69a4bScCQnwjRYqMQwIEMkrmm72ForJRfU2WsrjuA8ow8ElAFH0cqiFZyQUoViF4VSscUrOA9OBRo+DubZVWrDutriEsE7IXpPFWGKDjzsS2UvgsuZZTUMroePAoh3ZoyLxxXz9l+6HaVWUsmG67mrBS3NOyjPX99N28VD6cbLnnnx8mcG9eJzy/ueXYMG6vamBo1U3UJUCxO7F9s3EGmY4
K+Pf29G6Oom1nYiz4w43Yi3cPUDBpxmwGvNV4+JRYEIUJsXDmCR4jMz1zfE587odqaKeed9O+zGtZ/jtkWqXhld3R57OfTKE7VjXGCxS0CgL47Hdky5MuiXCb2snevPfIYDXxl4VUi5PN/WVfnq66/5h//oj8wLbee05Ewu5hgVLZvhFgSv4NXm07fTyrVs3ntux83FNvr+H9CgTNmOBxdjvd0DWMTonGM3jgzRILVxGFBV5nkmpcSyLJzPZ4NUQ8A3+M0MvmMIzZA7j3eeWgulr5cIOMglk5oBzxUUx7S/ZZgOrCnxeDyScuZ0PrGmlZxWlmXm1e2B3/rxZ38+DDjSMCNRnCjegXcQGrYcnMPhcFWQSltYdnm8OKIPROcJzhKEtvuatxucJQE7rluAkjM5Z0rJlJwpuVBzQTxtxQjBO4YYCFSCgXZmMGu1RKgPXN1T212o2kOli6erPeTT91NUguGOHuynk7bb2yFrVUo2GMElrm78X23dupHk+jMvDz4LvZ9DEh+O2Z6F+9tj/Rv
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " deer deer ship ship\n"
     ]
    }
   ],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "\n",
    "# functions to show an image\n",
    "\n",
    "\n",
    "def imshow(img):\n",
    "    img = img / 2 + 0.5     # unnormalize\n",
    "    npimg = img.numpy()\n",
    "    plt.imshow(np.transpose(npimg, (1, 2, 0)))\n",
    "    plt.show()\n",
    "\n",
    "\n",
    "# get some random training images\n",
    "dataiter = iter(trainloader)\n",
    "images, labels = next(dataiter)\n",
    "\n",
    "# show images\n",
    "imshow(torchvision.utils.make_grid(images))\n",
    "# print labels\n",
    "print(' '.join('%5s' % classes[labels[j]] for j in range(4)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. Define a Convolutional Neural Network\n",
    "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
    "Copy the neural network from the Neural Networks section before and modify it to\n",
    "take 3-channel images (instead of the 1-channel images it was defined for).\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(3, 6, 5)\n",
    "        self.pool = nn.MaxPool2d(2, 2)\n",
    "        self.conv2 = nn.Conv2d(6, 16, 5)\n",
    "        self.fc1 = nn.Linear(16 * 5 * 5, 120)\n",
    "        self.fc2 = nn.Linear(120, 84)\n",
    "        self.fc3 = nn.Linear(84, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.pool(F.relu(self.conv1(x)))\n",
    "        x = self.pool(F.relu(self.conv2(x)))\n",
    "        x = x.view(-1, 16 * 5 * 5)\n",
    "        x = F.relu(self.fc1(x))\n",
    "        x = F.relu(self.fc2(x))\n",
    "        x = self.fc3(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "net = Net()"
   ]
  },
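  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick shape check (a sketch added for clarity, not part of the\n",
    "original flow): a dummy batch of one 3-channel 32x32 image should produce\n",
    "one score per class, i.e. an output of shape (1, 10):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Dummy input with the same shape as one CIFAR10 batch element.\n",
    "dummy = torch.randn(1, 3, 32, 32)\n",
    "print(net(dummy).shape)  # expected: torch.Size([1, 10])"
   ]
  },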
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. Define a loss function and optimizer\n",
    "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
    "Let's use a classification cross-entropy loss and SGD with momentum.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.optim as optim\n",
    "\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)"
   ]
  },
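  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (a sketch, not part of the original flow):\n",
    "``nn.CrossEntropyLoss`` takes raw, unnormalized scores of shape\n",
    "(batch, classes) and integer class labels of shape (batch,):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Dummy scores for a batch of 4 images, with arbitrary labels.\n",
    "dummy_scores = torch.randn(4, 10)\n",
    "dummy_labels = torch.tensor([1, 0, 4, 9])\n",
    "print(criterion(dummy_scores, dummy_labels).item())"
   ]
  },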
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4. Train the network\n",
    "^^^^^^^^^^^^^^^^^^^^\n",
    "\n",
    "This is when things start to get interesting.\n",
    "We simply have to loop over our data iterator, feed the inputs to the\n",
    "network, and optimize.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1, 2000] loss: 2.215\n",
      "[1, 4000] loss: 1.864\n",
      "[1, 6000] loss: 1.681\n",
      "[1, 8000] loss: 1.594\n",
      "[1, 10000] loss: 1.520\n",
      "[1, 12000] loss: 1.470\n",
      "[2, 2000] loss: 1.418\n",
      "[2, 4000] loss: 1.395\n",
      "[2, 6000] loss: 1.381\n",
      "[2, 8000] loss: 1.360\n",
      "[2, 10000] loss: 1.316\n",
      "[2, 12000] loss: 1.297\n",
      "Finished Training\n"
     ]
    }
   ],
   "source": [
    "for epoch in range(2):  # loop over the dataset multiple times\n",
    "\n",
    "    running_loss = 0.0\n",
    "    for i, data in enumerate(trainloader, 0):\n",
    "        # get the inputs; data is a list of [inputs, labels]\n",
    "        inputs, labels = data\n",
    "\n",
    "        # zero the parameter gradients\n",
    "        optimizer.zero_grad()\n",
    "\n",
    "        # forward + backward + optimize\n",
    "        outputs = net(inputs)\n",
    "        loss = criterion(outputs, labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        # print statistics\n",
    "        running_loss += loss.item()\n",
    "        if i % 2000 == 1999:    # print every 2000 mini-batches\n",
    "            print('[%d, %5d] loss: %.3f' %\n",
    "                  (epoch + 1, i + 1, running_loss / 2000))\n",
    "            running_loss = 0.0\n",
    "\n",
    "print('Finished Training')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's quickly save our trained model:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "PATH = './cifar_net.pth'\n",
    "torch.save(net.state_dict(), PATH)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See `here <https://pytorch.org/docs/stable/notes/serialization.html>`_\n",
    "for more details on saving PyTorch models.\n",
    "\n",
    "5. Test the network on the test data\n",
    "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
    "\n",
    "We have trained the network for 2 passes over the training dataset.\n",
    "But we need to check if the network has learnt anything at all.\n",
    "\n",
    "We will check this by predicting the class label that the neural network\n",
    "outputs, and checking it against the ground truth. If the prediction is\n",
    "correct, we add the sample to the list of correct predictions.\n",
    "\n",
    "Okay, first step. Let us display an image from the test set to get familiar.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAAB5CAYAAAAgYXpDAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAACofElEQVR4nOz9S6ilW7bnh/3GfHzftx77ERHnnXkf5VJVuWGwDYVtcKdAGNwwVK+wDEYGwW25YXBDhTumetUyGNy6YGEJjC2BDVJDYIxAGHdcJfmBLZWkqro3771ZefK8ImI/1lrfY8453BhzfmvtiDiZcTLv1alLxcxcZ+9Yez2+x5xjjvEf//Efoqp8GB/Gh/FhfBh/+Yb7sQ/gw/gwPowP48P4zcYHA/5hfBgfxofxl3R8MOAfxofxYXwYf0nHBwP+YXwYH8aH8Zd0fDDgH8aH8WF8GH9JxwcD/mF8GB/Gh/GXdPxWBlxE/vsi8p+LyD8Rkb/753VQH8aH8WF8GB/Grx/ym/LARcQD/wXw3wN+DvxD4F9R1f/0z+/wPowP48P4MD6M7xvht3jvfwv4J6r6RwAi8n8E/jbwvQZ8u93q7e3tb/GVH8aH8WF8GP/ijS+//PJbVf34zed/GwP+E+DPLv79c+C//avecHt7yx/8wR/8Fl/5YXwYH8aH8S/e+Ht/7+/9ybue/wtPYorIH4jIfyQi/9HxePyL/roP48P4MD6Mf2HGb2PA/xnwOxf//ml97slQ1T9U1b+pqn9zu93+Fl/3YXwYH8aH8WFcjt8GQvmHwF8Tkb+CGe7/IfA/+iEf4HQm5keEvD4ngIg8eZ2i2P8VWtJVZH3duxOx2l5W/+nqp1/+Tc+/t79dfHX73KefL09+b386f9LTQ2jPteNQ1fN7tH2eXnyqnv+rUOrrtL0XkLjHdfv1WESEm5sb9vv9k+N98zr+aOMdt0ffcWjrU++83O+6x7/d+S3LwqtXr5jneX3OhY64vUFcWL+zlELJqd47e4QQiLFDRHDOISKInI+oXXt77vL385Ffzmfn7D1S//hklrV5Xgq5lPp6d/H69RvW74EnM+mdY10NqpSiF98riJN63Pp0HaiCCE7kYnHBw8MDd69fPVkrIXi894hzON8h4hDqdarXDaDUj1XNlHp+8o6jb+fq6meAnD3QXzEVbC29fR30jWsj6wW4vJpvjHc++auJIFJtlV3PyxlQz0NAS6LkmVIKx9PEkvL3f+DF+I0NuKomEfmfAv8XwAP/hqr+Jz/kM2J+4Hb8z/A6Imom0IngvRlbqTO+5Ewu2SaaFlDFeY93DkUp1cJps3qoGec6x0RAS4ASARAzizifcS7bpMKd7520hVpQhVIURVGVs9EVO8ZS6gRESO+w4m9OklIKOZf6XtdeVCesrj+zKkUhKSwFiipLtufis7+Oe/5fXReQ956/9tf+Gn/jb/yNuhjL+n1/3kb8vT9P3/79yVPrdaZZM9oUf7pq20VVu4/2ZPvjr1s77zzuZmTu7u74B//gH/DNN9+sfwuba25+779OGK6gzrVlnjgeH20epkQphd3VFbe3z/Desxl6vPc4EZyzRemds5/V2DkRfDW0TmzDNoOVcAJd5/FOcFKNOVJ/YoZPhGVZGMcTAF3XEUJARPDeA+Bwq3G0y6gUzeu1e3I91msipJRIS7LPqBtDM75FlZxt7WnOlJLxzq/fLeJQ4B/9p/8//l//8T8kpbRe782mZ7sb8HFDt32B85EoHi+OEAJ916ECc1JSUdIyscwn2vptm1ybIs7ZGg0u4MXhRew6r1NC3zaw2q7D00mo9d5q/Q7qZ1ENbbs275ru5vgBUudktRXvmnN2Let9Eo8TX51Jb5/vHTjI8yPL6VvGceSPf/41dw/vBzf/Nh44qvrvA//+b/p+R8aXI6EczXCp4pwjiKvegAMViiZcNeC52IT0dSKoUg24XmyypV7cOnFFoES0RLMHUhAULwknbxhwFFEoFLQUFMjFJoAiqLZd1G52KULR5kWYgT/7+WfvZfXISoGc7XNKjQq0PDXgat8nasdS9xs0FfuOPD25jrZYNtzc3PyFGvAf9FnvZcAvNrjVgF8aZa0GHKC804AL8j0R2Pcfd3t9znk1fm045wnDnri9hlJAC+oiPtl9K7IgJeO7HXFzRYyBYbPB+2aAzQh7b8Y0OGdGWYRQjcNqwEuilAXnoO883ju8sG4CTtpPM6rzPBOiOSHDMBCCxzm/Gl1/4eEigmohawJV6pJ6w8OUdWNY5gUwZ6AZnRACpRRSsuij5ISWgneeGOMTA77ZbHnT2jlnhjrESNdvcL4jVuMbQ2AYejuKVAhFmZ2vN7jUtXs2sHbd7HijC/W6OqJUoytt4anNrYspoaWcDXidVi36af+DGkm5ZsDd07nzJCpqzmE7urrO9XwJVq9bpF4rV22WBxxooO7wiBNSKMzag5Y1Mnmf8VsZ8N92KJDXRZuBjMOBegTBW7AE6qqBKzhnV6hulva7grYQVqhecaneXZ2sCqL2KeapqIVx5grhyNXRqx4LZQ27tBrngqPgUBG03ogsQlHz6VOxz5eGeaDVOIOvi1LUtgoVECdtvqLavO9cPdJqtS+mTfOu3u0V2AT8i9J3/8EbwcXLz9ej/riIJNtZIyDq2pfVUz8vyjNI0M7v/CEi32/E33Xc7fVtgT35mxO66OliqHOgIATSHMgJKAtZleCU6CF4IQYhBDPUvkVF1Xnwoqthj74ZcDunkhVKwSEEJ/j6OBtwO09X31+8ELwZTCftCmh9sH5PmyOltE2/rJdSxK1rqH3C5XuccxYtOLcakvZqV7/JOzPAcgGjeCe8eaUVhxIodBTZgOvILoILeB8o0pvxdebAFJQlzahm0GT3va7zdo6gFFdIde0mV4/dW1CspUbpF/c+5UzO+cKzPhtYcwrr0bqyzjIqrCuuXiNn181sQWknCG/MR6nHE0MkdhHnPD5063Vtnr02Dz84M2Y6oVNPCblG9+83fnQDXhAygrmZBVXFqXnfvl2UZuQr/sVFmNgwpAvHokIm1bfTs7dRt4PV0EtbBHJpcHM9KjOgZhfcukzsWaGIR5sBr0Y8aTU8erEBqGWKnch6Nq5OHK0eqErDf5rRNs9T9ewfUCef4/uN0qVB+vM25L+pJ98W3eW/1yDm8rl6fy/B8QahtejkzQ3t6eb2biP+fcf9LuPdng/eE0NzHgTNnil4BCXNtvk6B94LwUPwjhjqvVkf1YA7M7bembG/PHUpoKKrh+6dqwb8cm42g14NuTvPe5F2bXX9++oo1O9JbcbWS+dQ/LpYbE43T98cmmasqgHXBvfousbascqFp+jeeZ0dqt6MuHQgPcV1OAkUiajr7TNdW2sTS/ZoUVsK2HVuUFKbRypQxKCo0jY7BCm2abU8QTu+tBRSxZTXaMV7nIOcpUKaXMA261etBtl7qoNUIdW34NJ2j9q1i3SyQZzH+WGNyFaYpz2CBydoihQfURfeimR+1fhRDfj5SpnxNuPpnhhTM8YFrdCJuDeMAZfYaH1eLOy0a
1zxdK3GQc7GzXZTM5HSDO/qKbrzSqt4VSGQCWQV5sWgm5SFXKBkZbFoFdFi8IcoHlvAIVevSeQCq6yGGodiuLi0VaJaF6J5GqWel17OrnddUtW/EC+8eaw/+H28bcRtvGHFaZuu2lbb7mnd6N5+9dvH90OO+3s9dhr0pnYf6+9OtBpb28qdaIvH8M0IwxrRoS2iq14vZ0ikQTOs7gAG1RSLJOvOfoYCxFdDcp7zVAhgdV5qtFfU2eej5JSYp4lSCjH6M1zkzJ1pU/xs8PXJ5uPakqm/l/adtHlcHRD9nhm5RpXKJVhhS7GgZPPAveI9hD7Qb3s0B0iAZgIFTwGlQoN1Q/MO8QEXYoV8qiNbMkWr69cMuHekFNbjFqEacFehodJu/sWxszqKl4ZZFUourMSKGj62ddtyIV0XiTHgnMFi5+ur5/u3zmq/3ocfOn5cA14nnT0ykMzjUVfDaQeY8dacwYHzzQO/QEOrl7ayNDAMUhGKepuWxaNqHkU134aNXXooUENNoSUpRRzqAqqOrIF
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "GroundTruth: cat ship ship plane\n"
     ]
    }
   ],
   "source": [
    "dataiter = iter(testloader)\n",
    "images, labels = next(dataiter)\n",
    "\n",
    "# print images\n",
    "imshow(torchvision.utils.make_grid(images))\n",
    "print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, let's load back in our saved model (note: saving and re-loading the model\n",
    "wasn't necessary here; we only did it to illustrate how to do so):\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<All keys matched successfully>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net = Net()\n",
    "net.load_state_dict(torch.load(PATH))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Okay, now let us see what the neural network thinks these examples above are:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "outputs = net(images)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The outputs are energies for the 10 classes.\n",
    "The higher the energy for a class, the more the network\n",
    "thinks that the image is of that particular class.\n",
    "So, let's get the index of the highest energy:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predicted: cat ship ship ship\n"
     ]
    }
   ],
   "source": [
    "_, predicted = torch.max(outputs, 1)\n",
    "\n",
    "print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]\n",
    "                              for j in range(4)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The results seem pretty good.\n",
    "\n",
    "Let us look at how the network performs on the whole dataset.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy of the network on the 10000 test images: 54 %\n"
     ]
    }
   ],
   "source": [
    "correct = 0\n",
    "total = 0\n",
    "with torch.no_grad():\n",
    "    for data in testloader:\n",
    "        images, labels = data\n",
    "        outputs = net(images)\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        total += labels.size(0)\n",
    "        correct += (predicted == labels).sum().item()\n",
    "\n",
    "print('Accuracy of the network on the 10000 test images: %d %%' % (\n",
    "    100 * correct / total))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That looks way better than chance, which is 10% accuracy (randomly picking\n",
    "a class out of 10 classes).\n",
    "Seems like the network learnt something.\n",
    "\n",
    "Hmmm, which classes performed well, and which did not:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy of plane : 70 %\n",
      "Accuracy of car : 66 %\n",
      "Accuracy of bird : 39 %\n",
      "Accuracy of cat : 34 %\n",
      "Accuracy of deer : 56 %\n",
      "Accuracy of dog : 37 %\n",
      "Accuracy of frog : 61 %\n",
      "Accuracy of horse : 59 %\n",
      "Accuracy of ship : 63 %\n",
      "Accuracy of truck : 56 %\n"
     ]
    }
   ],
   "source": [
    "class_correct = list(0. for i in range(10))\n",
    "class_total = list(0. for i in range(10))\n",
    "with torch.no_grad():\n",
    "    for data in testloader:\n",
    "        images, labels = data\n",
    "        outputs = net(images)\n",
    "        _, predicted = torch.max(outputs, 1)\n",
    "        c = (predicted == labels).squeeze()\n",
    "        for i in range(4):\n",
    "            label = labels[i]\n",
    "            class_correct[label] += c[i].item()\n",
    "            class_total[label] += 1\n",
    "\n",
    "\n",
    "for i in range(10):\n",
    "    print('Accuracy of %5s : %2d %%' % (\n",
    "        classes[i], 100 * class_correct[i] / class_total[i]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Okay, so what next?\n",
    "\n",
    "How do we run these neural networks on the GPU?\n",
    "\n",
    "Training on GPU\n",
    "----------------\n",
    "Just like how you transfer a Tensor onto the GPU, you transfer the neural\n",
    "net onto the GPU.\n",
    "\n",
    "Let's first define our device as the first visible cuda device if we have\n",
    "CUDA available:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cuda:0\n"
     ]
    }
   ],
   "source": [
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "\n",
    "# Assuming that we are on a CUDA machine, this should print a CUDA device:\n",
    "\n",
    "print(device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The rest of this section assumes that ``device`` is a CUDA device.\n",
    "\n",
    "Then these methods will recursively go over all modules and convert their\n",
    "parameters and buffers to CUDA tensors:\n",
    "\n",
    ".. code:: python\n",
    "\n",
    "    net.to(device)\n",
    "\n",
    "\n",
    "Remember that you will have to send the inputs and targets at every step\n",
    "to the GPU too:\n",
    "\n",
    ".. code:: python\n",
    "\n",
    "    inputs, labels = data[0].to(device), data[1].to(device)\n",
    "\n",
    "A consolidated sketch of these GPU changes appears at the end of this\n",
    "notebook.\n",
    "\n",
    "Why don't I notice a MASSIVE speedup compared to CPU? Because your network\n",
    "is really small.\n",
    "\n",
    "**Exercise:** Try increasing the width of your network (argument 2 of\n",
    "the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –\n",
    "they need to be the same number), and see what kind of speedup you get.\n",
    "\n",
    "**Goals achieved**:\n",
    "\n",
    "- Understanding PyTorch's Tensor library and neural networks at a high level.\n",
    "- Training a small neural network to classify images\n",
    "\n",
    "Training on multiple GPUs\n",
    "-------------------------\n",
    "If you want to see an even more MASSIVE speedup using all of your GPUs,\n",
    "please check out :doc:`data_parallel_tutorial`.\n",
    "\n",
    "Where do I go next?\n",
    "-------------------\n",
    "\n",
    "- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`\n",
    "- `Train a state-of-the-art ResNet network on imagenet`_\n",
    "- `Train a face generator using Generative Adversarial Networks`_\n",
    "- `Train a word-level language model using Recurrent LSTM networks`_\n",
    "- `More examples`_\n",
    "- `More tutorials`_\n",
    "- `Discuss PyTorch on the Forums`_\n",
    "- `Chat with other users on Slack`_\n",
    "\n",
    "\n"
   ]
  },
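  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following cell is a consolidated sketch of the GPU changes described\n",
    "in the *Training on GPU* section above (assuming a CUDA device is\n",
    "available; on a CPU-only machine it simply stays on the CPU). It is not\n",
    "part of the original tutorial and was not executed here:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: the training loop from step 4, with the device transfers added.\n",
    "device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
    "net.to(device)  # moves all parameters and buffers to the device\n",
    "\n",
    "for epoch in range(2):\n",
    "    for data in trainloader:\n",
    "        # send the inputs and targets to the GPU at every step\n",
    "        inputs, labels = data[0].to(device), data[1].to(device)\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss = criterion(net(inputs), labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()"
   ]
  },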
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}