{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"What is PyTorch?\n",
"================\n",
"\n",
"It’s a Python-based scientific computing package targeted at two sets of\n",
"audiences:\n",
"\n",
"- A replacement for NumPy to use the power of GPUs\n",
|
||
"- a deep learning research platform that provides maximum flexibility\n",
|
||
" and speed\n",
|
||
"\n",
|
||
"Getting Started\n",
|
||
"---------------\n",
|
||
"\n",
|
||
"## Tensors\n",
|
||
"\n",
|
||
"Tensors are similar to NumPy’s ndarrays, with the addition being that\n",
|
||
"Tensors can also be used on a GPU to accelerate computing.\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 2,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from __future__ import print_function\n",
|
||
"import torch"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"<div class=\"alert alert-info\"><h4>Note</h4><p>An uninitialized matrix is declared,\n",
|
||
" but does not contain definite known\n",
|
||
" values before it is used. When an\n",
|
||
" uninitialized matrix is created,\n",
|
||
" whatever values were in the allocated\n",
|
||
" memory at the time will appear as the initial values.</p></div>\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Construct a 5x3 matrix, uninitialized:\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[6.3417e-36, 0.0000e+00, 6.3419e-36],\n",
"        [0.0000e+00, 1.1210e-43, 0.0000e+00],\n",
"        [1.5695e-43, 0.0000e+00, 0.0000e+00],\n",
"        [0.0000e+00, 6.3917e+04, 4.5559e-41],\n",
"        [3.1636e+15, 0.0000e+00, 1.8077e-43]])\n"
]
}
],
"source": [
"x = torch.empty(5, 3)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Construct a randomly initialized matrix:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[0.6721, 0.7974, 0.8837],\n",
"        [0.6526, 0.6741, 0.4159],\n",
"        [0.7239, 0.8301, 0.9470],\n",
"        [0.7420, 0.4967, 0.1845],\n",
"        [0.2672, 0.3700, 0.3739]])\n"
]
}
],
"source": [
"x = torch.rand(5, 3)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Construct a matrix filled zeros and of dtype long:\n",
|
||
"\n"
|
||
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[0, 0, 0],\n",
"        [0, 0, 0],\n",
"        [0, 0, 0],\n",
"        [0, 0, 0],\n",
"        [0, 0, 0]])\n"
]
}
],
"source": [
"x = torch.zeros(5, 3, dtype=torch.long)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Construct a tensor directly from data:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([5.5000, 3.0000])\n"
]
}
],
"source": [
"x = torch.tensor([5.5, 3])\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or create a tensor based on an existing tensor. These methods\n",
|
||
"will reuse properties of the input tensor, e.g. dtype, unless\n",
|
||
"new values are provided by user\n",
|
||
"\n"
|
||
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1., 1.],\n",
"        [1., 1., 1.],\n",
"        [1., 1., 1.],\n",
"        [1., 1., 1.],\n",
"        [1., 1., 1.]], dtype=torch.float64)\n",
"tensor([[ 1.8133, -2.0788,  0.1688],\n",
"        [-0.8336, -0.9961, -0.2995],\n",
"        [ 1.5661, -0.0205, -0.1414],\n",
"        [-2.0433,  0.0211,  2.0895],\n",
"        [ 0.2971, -0.2518,  0.5030]])\n"
]
}
],
"source": [
"x = x.new_ones(5, 3, dtype=torch.double)    # new_* methods take in sizes\n",
"print(x)\n",
"\n",
"x = torch.randn_like(x, dtype=torch.float)  # override dtype!\n",
"print(x)                                    # result has the same size"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get its size:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([5, 3])\n"
]
}
],
"source": [
"print(x.size())"
]
},
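{
"cell_type": "markdown",
"metadata": {},
"source": [
"``x.size()`` returns a ``torch.Size``, which behaves like a Python tuple. The next cell is a small illustrative sketch (not part of the original tutorial) showing tuple-style unpacking and indexing:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch (assumed, not from the original tutorial):\n",
"# torch.Size is a subclass of tuple, so ordinary tuple operations work on it.\n",
"rows, cols = x.size()   # tuple-style unpacking\n",
"print(rows, cols)\n",
"print(x.size()[0])      # tuple-style indexing"
]
},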
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\"><h4>Note</h4><p>``torch.Size`` is in fact a tuple, so it supports all tuple operations.</p></div>\n",
"\n",
"## Operations\n",
"\n",
"There are multiple syntaxes for operations. In the following\n",
"example, we will take a look at the addition operation.\n",
"\n",
"Addition: syntax 1\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 2.4870, -1.1092,  0.2733],\n",
"        [-0.5093, -0.1695,  0.1134],\n",
"        [ 2.4207,  0.2844,  0.7987],\n",
"        [-1.3298,  0.4374,  2.0926],\n",
"        [ 1.1103,  0.2101,  1.2337]])\n"
]
}
],
"source": [
"y = torch.rand(5, 3)\n",
"print(x + y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Addition: syntax 2\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.0166,  1.8655,  0.2933],\n",
"        [ 3.2162,  0.1241,  0.9112],\n",
"        [ 1.4397,  0.8543,  0.4838],\n",
"        [ 0.6985,  0.5795,  0.2113],\n",
"        [ 0.7467, -0.7956,  0.6495]])\n"
]
}
],
"source": [
"print(torch.add(x, y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Addition: providing an output tensor as argument\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 2.4870, -1.1092,  0.2733],\n",
"        [-0.5093, -0.1695,  0.1134],\n",
"        [ 2.4207,  0.2844,  0.7987],\n",
"        [-1.3298,  0.4374,  2.0926],\n",
"        [ 1.1103,  0.2101,  1.2337]])\n"
]
}
],
"source": [
"result = torch.empty(5, 3)\n",
"torch.add(x, y, out=result)\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Addition: in-place\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 0.0166,  1.8655,  0.2933],\n",
"        [ 3.2162,  0.1241,  0.9112],\n",
"        [ 1.4397,  0.8543,  0.4838],\n",
"        [ 0.6985,  0.5795,  0.2113],\n",
"        [ 0.7467, -0.7956,  0.6495]])\n"
]
}
],
"source": [
"# adds x to y\n",
"y.add_(x)\n",
"print(y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\"><h4>Note</h4><p>Any operation that mutates a tensor in-place is post-fixed with an ``_``.\n",
|
||
" For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.</p></div>\n",
|
||
"\n",
|
||
"You can use standard NumPy-like indexing with all bells and whistles!\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 13,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"tensor([ 1.3796, -0.6919, 0.7494, -0.1942, -1.0191])\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"print(x[:, 1])"
|
||
]
|
||
},
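{
"cell_type": "markdown",
"metadata": {},
"source": [
"The note above mentions in-place operations such as ``x.copy_(y)`` and ``x.t_()``. The next cell is a minimal sketch (not part of the original tutorial) demonstrating them:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch (assumed, not from the original tutorial):\n",
"# methods ending in ``_`` modify the tensor they are called on.\n",
"u = torch.zeros(2, 3)\n",
"v = torch.ones(2, 3)\n",
"u.copy_(v)        # copies the values of v into u, in place\n",
"print(u)\n",
"u.t_()            # transposes u in place; u is now 3x2\n",
"print(u.size())"
]
},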
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Resizing: If you want to resize/reshape tensor, you can use ``torch.view``:\n",
|
||
"\n"
|
||
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])\n"
]
}
],
"source": [
"x = torch.randn(4, 4)\n",
"y = x.view(16)\n",
"z = x.view(-1, 8)  # the size -1 is inferred from other dimensions\n",
"print(x.size(), y.size(), z.size())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have a one element tensor, use ``.item()`` to get the value as a\n",
|
||
"Python number\n",
|
||
"\n"
|
||
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([-0.0506])\n",
"-0.05061284825205803\n"
]
}
],
"source": [
"x = torch.randn(1)\n",
"print(x)\n",
"print(x.item())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Read later:**\n",
"\n",
"\n",
"  100+ Tensor operations, including transposing, indexing, slicing,\n",
"  mathematical operations, linear algebra, random numbers, etc.,\n",
"  are described\n",
"  [here](https://pytorch.org/docs/torch).\n",
"\n",
"NumPy Bridge\n",
"------------\n",
"\n",
"Converting a Torch Tensor to a NumPy array and vice versa is a breeze.\n",
"\n",
"The Torch Tensor and NumPy array will share their underlying memory\n",
"locations (if the Torch Tensor is on CPU), and changing one will change\n",
"the other.\n",
"\n",
"## Converting a Torch Tensor to a NumPy Array\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([1., 1., 1., 1., 1.])\n"
]
}
],
"source": [
"a = torch.ones(5)\n",
"print(a)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1. 1. 1. 1. 1.]\n"
]
}
],
"source": [
"b = a.numpy()\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See how the numpy array changed in value.\n",
|
||
"\n"
|
||
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([2., 2., 2., 2., 2.])\n",
"[2. 2. 2. 2. 2.]\n"
]
}
],
"source": [
"a.add_(1)\n",
"print(a)\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Converting NumPy Array to Torch Tensor\n",
"See how changing the np array changed the Torch Tensor automatically\n",
|
||
"\n"
|
||
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[2. 2. 2. 2. 2.]\n",
"tensor([2., 2., 2., 2., 2.], dtype=torch.float64)\n"
]
}
],
"source": [
"import numpy as np\n",
"a = np.ones(5)\n",
"b = torch.from_numpy(a)\n",
"np.add(a, 1, out=a)\n",
"print(a)\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All the Tensors on the CPU except a CharTensor support converting to\n",
"NumPy and back.\n",
"\n",
"CUDA Tensors\n",
"------------\n",
"\n",
"Tensors can be moved onto any device using the ``.to`` method.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1.7.0\n",
"tensor([[ 1.3566,  1.3653,  1.4052, -0.1960],\n",
"        [ 1.6845,  0.8119,  1.1873, -0.4534],\n",
"        [ 1.6096, -1.4654,  1.7330,  2.0145],\n",
"        [ 0.6571,  2.4976,  2.0423,  0.8646]], device='cuda:0')\n",
"tensor([[ 1.3566,  1.3653,  1.4052, -0.1960],\n",
"        [ 1.6845,  0.8119,  1.1873, -0.4534],\n",
"        [ 1.6096, -1.4654,  1.7330,  2.0145],\n",
"        [ 0.6571,  2.4976,  2.0423,  0.8646]], dtype=torch.float64)\n"
]
}
],
"source": [
"# let us run this cell only if CUDA is available\n",
"# We will use ``torch.device`` objects to move tensors in and out of GPU\n",
"print(torch.__version__)\n",
"if torch.cuda.is_available():\n",
"    device = torch.device(\"cuda\")          # a CUDA device object\n",
"    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU\n",
"    x = x.to(device)                       # or just use strings ``.to(\"cuda\")``\n",
"    z = x + y\n",
"    print(z)\n",
"    print(z.to(\"cpu\", torch.double))       # ``.to`` can also change dtype together!\n",
"    "
]
}
],
"metadata": {
|
||
"kernelspec": {
|
||
"display_name": "Python 3",
|
||
"language": "python",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3.6.9"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 1
|
||
}
|