zad_4
parent a0f7698b7d
commit 97bef3fb5c

.gitignore (vendored, new file, 20 lines)
@@ -0,0 +1,20 @@
# ---> JupyterNotebooks
# gitignore template for Jupyter Notebooks
# website: http://jupyter.org/

.ipynb_checkpoints
*/.ipynb_checkpoints/*

# IPython
profile_default/
ipython_config.py

# Remove previous ipynb_checkpoints
# git rm -r .ipynb_checkpoints/

datasets/att_faces/
datasets/glasses/
datasets/INRIAPerson/
datasets/yaleextb/
vid/gen-*
dnn/
README.md (new file, 27 lines)
@@ -0,0 +1,27 @@
# Computer Vision – course materials

## Course objectives

The aim of the course is to learn how to use classical and modern computer vision algorithms. Participants will process digital images and video material in order to replicate tasks that the human visual system is adapted to, such as object recognition and tracking. The course is based on practical use of the OpenCV library.

## Prerequisites: knowledge, skills, and social competences

* Programming skills at the level of a computer science engineering graduate.
* Familiarity with the basics of machine learning.

## Software

* Ubuntu 20.04
* Python 3.7
* OpenCV 4.5.3
* FFmpeg 4.1.8

## Bibliography

* L. Venturi & K. Korda (2020). Hands-On Vision and Behavior for Self-Driving Cars. Packt Publishing.
* D.M. Escriva & R. Laganiere (2019). OpenCV 4 Computer Vision Application Programming Cookbook (4th ed.). Packt Publishing.
* A.F. Villan (2019). Mastering OpenCV 4 with Python. Packt Publishing.
* E.R. Davies (2017). Computer Vision: Principles, Algorithms, Applications, Learning (5th ed.). Academic Press.
* R. Szeliski (2021). Computer Vision: Algorithms and Applications. Springer-Verlag.
* D. Fouhey & J. Johnson (2021). EECS 442: Computer Vision. University of Michigan.
* S. Mallick (2021). LearnOpenCV.
BIN  datasets/att_faces.zip (new file, binary file not shown)
BIN  datasets/glasses.zip (new file, binary file not shown)
BIN  datasets/inria-person-sub.zip (new file, binary file not shown)
BIN  datasets/yaleextb.zip (new file, binary file not shown)
BIN  vid/bike.mp4 (new file, binary file not shown)
BIN  vid/blinking-man.mp4 (new file, binary file not shown)
BIN  vid/blinking-woman1.mp4 (new file, binary file not shown)
BIN  vid/blinking-woman2.mp4 (new file, binary file not shown)
BIN  vid/football.mp4 (new file, binary file not shown)
BIN  vid/protest.mp4 (new file, binary file not shown)
wko-01.ipynb (new file, 1834 lines, diff suppressed: lines too long)
wko-02.ipynb (new file, 1319 lines, diff suppressed: lines too long)
wko-03.ipynb (309 lines, diff suppressed: lines too long)
wko-04.ipynb (new file, 654 lines, diff suppressed: lines too long)
wko-05.ipynb (new file, 1166 lines, diff suppressed: lines too long)

wko-06.ipynb (new file, 620 lines)
@@ -0,0 +1,620 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "819ce420",
   "metadata": {},
   "source": [
    "![Logo 1](img/aitech-logotyp-1.jpg)\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "<h1> Computer Vision </h1>\n",
    "<h2> 06. <i>Image recognition and segmentation</i> [lab]</h2>\n",
    "<h3>Andrzej Wójtowicz (2021)</h3>\n",
    "</div>\n",
    "\n",
    "![Logo 2](img/aitech-logotyp-2.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6dc5acc",
   "metadata": {},
   "source": [
    "In the following materials we will see how a classical approach can be used to recognize people in images, and how to quickly split an image into its foreground and background elements.\n",
    "\n",
    "Let us start by loading the required libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e45bb312",
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2 as cv\n",
    "import numpy as np\n",
    "import sklearn.svm\n",
    "import sklearn.metrics\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "import os\n",
    "import random"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b757675",
   "metadata": {},
   "source": [
    "Our main goal will be to recognize people in images with the classical *histogram of oriented gradients* (HOG) method. In short, for a given image we want to obtain a feature vector that we can then use in an SVM classifier. The details can be found in *6.3.2 Pedestrian detection* of R. Szeliski (2022) *Computer Vision: Algorithms and Applications*; here we illustrate the technical use of the method.\n",
    "\n",
    "# Image classification using HOG and SVM\n",
    "\n",
    "We will try to build a classifier that indicates whether the person in an image is wearing glasses or not. Let us unpack the dataset that we will be using:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b953b82",
   "metadata": {},
   "outputs": [],
   "source": [
    "!cd datasets && unzip -qo glasses.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4a457f3",
   "metadata": {},
   "source": [
    "Next, we load the data and split it into two sets in an 80/20 ratio:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "737d95c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_dir = \"datasets/glasses\"\n",
    "images_0 = os.listdir(f\"{dataset_dir}/with\")\n",
    "images_0 = [f\"{dataset_dir}/with/{x}\" for x in images_0]\n",
    "images_1 = os.listdir(f\"{dataset_dir}/without\")\n",
    "images_1 = [f\"{dataset_dir}/without/{x}\" for x in images_1]\n",
    "images = images_0 + images_1\n",
    "random.seed(1337)\n",
    "random.shuffle(images)\n",
    "\n",
    "train_data = []\n",
    "test_data = []\n",
    "train_labels = []\n",
    "test_labels = []\n",
    "\n",
    "splitval = int((1-0.2)*len(images))\n",
    "\n",
    "for x in images[:splitval]:\n",
    "    train_data.append(cv.imread(x, cv.IMREAD_COLOR))\n",
    "    train_labels.append(x.split(\"/\")[2])\n",
    "\n",
    "for x in images[splitval:]:\n",
    "    test_data.append(cv.imread(x, cv.IMREAD_COLOR))\n",
    "    test_labels.append(x.split(\"/\")[2])\n",
    "\n",
    "d_labels = {\"with\": 0, \"without\": 1}\n",
    "\n",
    "train_labels = np.array([d_labels[x] for x in train_labels])\n",
    "test_labels = np.array([d_labels[x] for x in test_labels])\n",
    "\n",
    "print(f\"Train data: {len(train_data)}, test data: {len(test_data)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "265147e3",
   "metadata": {},
   "source": [
    "Below are a few sample images."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e0595915",
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,2))\n",
    "for i in range(5):\n",
    "    plt.subplot(151 + i)\n",
    "    plt.imshow(train_data[i][:,:,::-1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e05e27e8",
   "metadata": {},
   "source": [
    "We create a HOG descriptor with the [`cv.HOGDescriptor()`](https://docs.opencv.org/4.5.3/d5/d33/structcv_1_1HOGDescriptor.html) function. With the [`compute()`](https://docs.opencv.org/4.5.3/d5/d33/structcv_1_1HOGDescriptor.html#a38cd712cd5a6d9ed0344731fcd121e8b) method we produce the feature vectors that will serve as input to the classifier. Below is also a sample configuration of the descriptor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f21e8924",
   "metadata": {},
   "outputs": [],
   "source": [
    "hp_win_size = (96, 32)\n",
    "hp_block_size = (8, 8)\n",
    "hp_block_stride = (8, 8)\n",
    "hp_cell_size = (4, 4)\n",
    "hp_n_bins = 9\n",
    "hp_deriv_aperture = 0\n",
    "hp_win_sigma = 4.0\n",
    "hp_histogram_norm_type = 1\n",
    "hp_l2_hys_threshold = 0.2\n",
    "hp_gamma_correction = True\n",
    "hp_n_levels = 64\n",
    "hp_signed_gradient = True\n",
    "\n",
    "hog_descriptor = cv.HOGDescriptor(\n",
    "    hp_win_size, hp_block_size, hp_block_stride, hp_cell_size,\n",
    "    hp_n_bins, hp_deriv_aperture, hp_win_sigma,\n",
    "    hp_histogram_norm_type, hp_l2_hys_threshold,\n",
    "    hp_gamma_correction, hp_n_levels, hp_signed_gradient)\n",
    "\n",
    "train_hog = np.vstack([hog_descriptor.compute(x).ravel() for x in train_data])\n",
    "test_hog = np.vstack([hog_descriptor.compute(x).ravel() for x in test_data])"
   ]
  },
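  {
   "cell_type": "markdown",
   "id": "a5f3c001",
   "metadata": {},
   "source": [
    "As a quick sanity check (an aside added to these materials, not part of the original exercise), the length of the HOG feature vector follows directly from the configuration above: the window holds $((96-8)/8+1) \\cdot ((32-8)/8+1) = 12 \\cdot 4 = 48$ block positions, each block contains $(8/4)^2 = 4$ cells, and each cell contributes 9 bins, i.e. $48 \\cdot 4 \\cdot 9 = 1728$ features per image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a5f3c002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Added check: descriptor length implied by the window/block/cell/bin layout.\n",
    "print(hog_descriptor.getDescriptorSize())  # expected: 1728\n",
    "print(train_hog.shape)"
   ]
  },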
  {
   "cell_type": "markdown",
   "id": "755b8ebe",
   "metadata": {},
   "source": [
    "For classification we will use an SVM. We can use the implementation found in the [`cv.ml`](https://docs.opencv.org/4.5.3/d1/d2d/classcv_1_1ml_1_1SVM.html) module:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b46783d4",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = cv.ml.SVM_create()\n",
    "model.setGamma(0.02)\n",
    "model.setC(2.5)\n",
    "model.setKernel(cv.ml.SVM_RBF)\n",
    "model.setType(cv.ml.SVM_C_SVC)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8f47c54",
   "metadata": {},
   "source": [
    "We train the model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "810f9a1e",
   "metadata": {},
   "outputs": [],
   "source": [
    "model.train(np.array(train_hog), cv.ml.ROW_SAMPLE, train_labels);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69d39eee",
   "metadata": {},
   "source": [
    "We check the result on the test data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "763b6dc7",
   "metadata": {},
   "outputs": [],
   "source": [
    "predictions = model.predict(test_hog)[1].ravel()\n",
    "accuracy = (test_labels == predictions).mean()\n",
    "print(f\"ACC: {accuracy * 100:.2f} %\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2dd04ec5",
   "metadata": {},
   "source": [
    "We can also use the classifier implementation from the [`scikit-learn`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) library:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "13b7ba1c",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = sklearn.svm.SVC(C=2.5, gamma=0.02, kernel='rbf')\n",
    "model.fit(train_hog, train_labels)\n",
    "\n",
    "predictions = model.predict(test_hog)\n",
    "accuracy = (test_labels == predictions).mean()\n",
    "print(f\"ACC: {accuracy * 100:.2f} %\")"
   ]
  },
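  {
   "cell_type": "markdown",
   "id": "a5f3c003",
   "metadata": {},
   "source": [
    "Beyond overall accuracy it can be useful to see how the errors are distributed between the two classes. Below is a minimal sketch added to these materials, using the already imported `sklearn.metrics`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a5f3c004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Added sketch: confusion matrix of the glasses classifier.\n",
    "# Rows are true classes, columns are predictions (0 = with, 1 = without).\n",
    "print(sklearn.metrics.confusion_matrix(test_labels, predictions))"
   ]
  },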
  {
   "cell_type": "markdown",
   "id": "2259c310",
   "metadata": {},
   "source": [
    "# Detecting people\n",
    "\n",
    "We can apply the classification methodology above to recognizing objects in images, e.g. people. In this case we will want to indicate where in the image the given object or objects are located.\n",
    "\n",
    "Let us start by unpacking the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d8497390",
   "metadata": {},
   "outputs": [],
   "source": [
    "!cd datasets && unzip -qo inria-person-sub.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "30374bad",
   "metadata": {},
   "source": [
    "We load the data, which is already split into two sets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "978d77cf",
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_dir = \"datasets/INRIAPerson\"\n",
    "\n",
    "images_train_0 = os.listdir(f\"{dataset_dir}/train_64x128_H96/negPatches\")\n",
    "images_train_0 = [f\"{dataset_dir}/train_64x128_H96/negPatches/{x}\" for x in images_train_0]\n",
    "images_train_1 = os.listdir(f\"{dataset_dir}/train_64x128_H96/posPatches\")\n",
    "images_train_1 = [f\"{dataset_dir}/train_64x128_H96/posPatches/{x}\" for x in images_train_1]\n",
    "\n",
    "images_test_0 = os.listdir(f\"{dataset_dir}/test_64x128_H96/negPatches\")\n",
    "images_test_0 = [f\"{dataset_dir}/test_64x128_H96/negPatches/{x}\" for x in images_test_0]\n",
    "images_test_1 = os.listdir(f\"{dataset_dir}/test_64x128_H96/posPatches\")\n",
    "images_test_1 = [f\"{dataset_dir}/test_64x128_H96/posPatches/{x}\" for x in images_test_1]\n",
    "\n",
    "train_data = []\n",
    "test_data = []\n",
    "train_labels = []\n",
    "test_labels = []\n",
    "\n",
    "for x in images_train_0:\n",
    "    img = cv.imread(x, cv.IMREAD_COLOR)\n",
    "    if img is not None:\n",
    "        train_data.append(img)\n",
    "        train_labels.append(0)\n",
    "\n",
    "for x in images_train_1:\n",
    "    img = cv.imread(x, cv.IMREAD_COLOR)\n",
    "    if img is not None:\n",
    "        train_data.append(img)\n",
    "        train_labels.append(1)\n",
    "\n",
    "for x in images_test_0:\n",
    "    img = cv.imread(x, cv.IMREAD_COLOR)\n",
    "    if img is not None:\n",
    "        test_data.append(img)\n",
    "        test_labels.append(0)\n",
    "\n",
    "for x in images_test_1:\n",
    "    img = cv.imread(x, cv.IMREAD_COLOR)\n",
    "    if img is not None:\n",
    "        test_data.append(img)\n",
    "        test_labels.append(1)\n",
    "\n",
    "print(f\"Train data: {len(train_data)}, test data: {len(test_data)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9bf41d6e",
   "metadata": {},
   "source": [
    "Below are a few sample images from the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f29d47c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,2))\n",
    "for i in range(3):\n",
    "    plt.subplot(161 + i)\n",
    "    plt.imshow(train_data[i][:,:,::-1]);\n",
    "for i in range(3):\n",
    "    plt.subplot(164 + i)\n",
    "    plt.imshow(train_data[-(i+1)][:,:,::-1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57cec468",
   "metadata": {},
   "source": [
    "We create the descriptor and the feature vectors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d5248df2",
   "metadata": {},
   "outputs": [],
   "source": [
    "hp_win_size = (64, 128)\n",
    "hp_block_size = (16, 16)\n",
    "hp_block_stride = (8, 8)\n",
    "hp_cell_size = (8, 8)\n",
    "hp_n_bins = 9\n",
    "hp_deriv_aperture = 1\n",
    "hp_win_sigma = -1\n",
    "hp_histogram_norm_type = 0\n",
    "hp_l2_hys_threshold = 0.2\n",
    "hp_gamma_correction = True\n",
    "hp_n_levels = 64\n",
    "hp_signed_gradient = False\n",
    "\n",
    "hog_descriptor = cv.HOGDescriptor(\n",
    "    hp_win_size, hp_block_size, hp_block_stride, hp_cell_size,\n",
    "    hp_n_bins, hp_deriv_aperture, hp_win_sigma,\n",
    "    hp_histogram_norm_type, hp_l2_hys_threshold,\n",
    "    hp_gamma_correction, hp_n_levels, hp_signed_gradient)\n",
    "\n",
    "train_hog = np.vstack([hog_descriptor.compute(x).ravel() for x in train_data])\n",
    "test_hog = np.vstack([hog_descriptor.compute(x).ravel() for x in test_data])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6782aa9",
   "metadata": {},
   "source": [
    "Next, we create the classifier:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8f6108ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = cv.ml.SVM_create()\n",
    "model.setGamma(0)\n",
    "model.setC(0.01)\n",
    "model.setKernel(cv.ml.SVM_LINEAR)\n",
    "model.setType(cv.ml.SVM_C_SVC)\n",
    "model.setTermCriteria((cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 1000, 1e-3))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bbfbde58",
   "metadata": {},
   "source": [
    "We train the model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "afd0bbb4",
   "metadata": {},
   "outputs": [],
   "source": [
    "model.train(np.array(train_hog), cv.ml.ROW_SAMPLE, np.array(train_labels));"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09626eed",
   "metadata": {},
   "source": [
    "We check the classification quality:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fa3be6b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "predictions = model.predict(test_hog)[1].ravel()\n",
    "accuracy = (test_labels == predictions).mean()\n",
    "print(f\"ACC: {accuracy * 100:.2f} %\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6df6682",
   "metadata": {},
   "source": [
    "Below is the same approach using the *scikit-learn* library:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b3de8d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "model2 = sklearn.svm.SVC(C=0.01, gamma='auto', kernel='linear', max_iter=1000)\n",
    "model2.fit(train_hog, train_labels)\n",
    "\n",
    "predictions = model2.predict(test_hog)\n",
    "print(f\"Accuracy: {sklearn.metrics.accuracy_score(test_labels, predictions) * 100:.2f} %\")\n",
    "print(f\"Precision: {sklearn.metrics.precision_score(test_labels, predictions) * 100:.2f} %\")\n",
    "print(f\"Recall: {sklearn.metrics.recall_score(test_labels, predictions) * 100:.2f} %\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e84c568",
   "metadata": {},
   "source": [
    "Now that we have a trained model, we would like to check whether there are any people in e.g. the image `img/pedestrians.jpg`, and if so, obtain bounding boxes around them. First we set the SVM classifier coefficients in our HOG descriptor using the [`setSVMDetector()`](https://docs.opencv.org/4.5.3/d5/d33/structcv_1_1HOGDescriptor.html#a6de5ac55631eed51e36278cde3a2c159) method. Then, with the [`detectMultiScale()`](https://docs.opencv.org/4.5.3/d5/d33/structcv_1_1HOGDescriptor.html#a91e56a2c317392e50fbaa2f5dc78d30b) method, we search for the objects (people) at multiple scales."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d6458103",
   "metadata": {},
   "outputs": [],
   "source": [
    "image = cv.imread(\"img/pedestrians.jpg\", cv.IMREAD_COLOR)\n",
    "scale = 600 / image.shape[0]\n",
    "image = cv.resize(image, None, fx=scale, fy=scale)\n",
    "\n",
    "# Build the detector vector from the trained cv.ml SVM: negated support vector plus bias (rho).\n",
    "support_vectors = model.getSupportVectors()\n",
    "rho, _, _ = model.getDecisionFunction(0)\n",
    "detector = np.zeros(support_vectors.shape[1] + 1, dtype=support_vectors.dtype)\n",
    "detector[:-1] = -support_vectors[:]\n",
    "detector[-1] = rho\n",
    "\n",
    "hog_descriptor.setSVMDetector(detector)\n",
    "\n",
    "locations, weights = hog_descriptor.detectMultiScale(\n",
    "    image, winStride=(8, 8), padding=(32, 32), scale=1.05,\n",
    "    finalThreshold=2, hitThreshold=1.0)\n",
    "\n",
    "for location, weight in zip(locations, weights):\n",
    "    x1, y1, w, h = location\n",
    "    x2, y2 = x1 + w, y1 + h\n",
    "    cv.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), thickness=3, lineType=cv.LINE_AA)\n",
    "    cv.putText(image, f\"{weight[0]:.2f}\", (x1, y1), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv.LINE_AA)\n",
    "\n",
    "plt.figure(figsize=(6,6))\n",
    "plt.imshow(image[:,:,::-1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd287c92",
   "metadata": {},
   "source": [
    "We even managed to detect something, given such a small training set ;) On the other hand, the two people in the foreground were missed, and the extent of the box around the person on the right is debatable.\n",
    "\n",
    "OpenCV also ships a default people classifier, available via the [`HOGDescriptor_getDefaultPeopleDetector()`](https://docs.opencv.org/4.5.3/d5/d33/structcv_1_1HOGDescriptor.html#a9c7a0b2aa72cf39b4b32b3eddea78203) function; below we can see how it performs on the same image:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "57a745c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "image = cv.imread(\"img/pedestrians.jpg\", cv.IMREAD_COLOR)\n",
    "scale = 600 / image.shape[0]\n",
    "image = cv.resize(image, None, fx=scale, fy=scale)\n",
    "\n",
    "hog_dflt_descriptor = cv.HOGDescriptor(\n",
    "    hp_win_size, hp_block_size, hp_block_stride, hp_cell_size,\n",
    "    hp_n_bins, hp_deriv_aperture, hp_win_sigma,\n",
    "    hp_histogram_norm_type, hp_l2_hys_threshold,\n",
    "    hp_gamma_correction, hp_n_levels, hp_signed_gradient)\n",
    "\n",
    "detector_dflt = cv.HOGDescriptor_getDefaultPeopleDetector()\n",
    "hog_dflt_descriptor.setSVMDetector(detector_dflt)\n",
    "\n",
    "locations, weights = hog_dflt_descriptor.detectMultiScale(\n",
    "    image, winStride=(8, 8), padding=(32, 32), scale=1.05,\n",
    "    finalThreshold=2, hitThreshold=1.0)\n",
    "\n",
    "for location, weight in zip(locations, weights):\n",
    "    x1, y1, w, h = location\n",
    "    x2, y2 = x1 + w, y1 + h\n",
    "    cv.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), thickness=3, lineType=cv.LINE_AA)\n",
    "    cv.putText(image, f\"{weight[0]:.2f}\", (x1, y1), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv.LINE_AA)\n",
    "\n",
    "plt.figure(figsize=(6,6))\n",
    "plt.imshow(image[:,:,::-1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c8cf915",
   "metadata": {},
   "source": [
    "# Image segmentation with GrabCut\n",
    "\n",
    "## Exercise 1\n",
    "\n",
    "In the following exercise we will use the [GrabCut](https://en.wikipedia.org/wiki/GrabCut) algorithm, an interactive image segmentation method that splits an image into foreground and background. In OpenCV the algorithm is implemented in the [`cv.grabCut()`](https://docs.opencv.org/4.5.3/d3/d47/group__imgproc__segmentation.html#ga909c1dda50efcbeaa3ce126be862b37f) function. Additional information about the algorithm can be found in the [documentation](https://docs.opencv.org/4.5.3/d8/d83/tutorial_py_grabcut.html).\n",
    "\n",
    "Prepare an interactive application that uses the GrabCut algorithm. The application should let the user mark an initial rectangle and then the mask elements (note which elements the mask may consist of). You can demonstrate it on the image `img/messi5.jpg`. (A minimal non-interactive sketch is added below as a starting point.)"
   ]
  },
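  {
   "cell_type": "markdown",
   "id": "a5f3c005",
   "metadata": {},
   "source": [
    "Below is a minimal, non-interactive sketch added to these materials as a starting point for the exercise. It runs `cv.grabCut` in rectangle-initialization mode and keeps only the pixels classified as (probable) foreground; the initial rectangle is an assumed value, and the interactive rectangle/mask editing is left as the actual exercise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a5f3c006",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Added sketch: GrabCut with rectangle initialization only, no interactivity.\n",
    "img = cv.imread(\"img/messi5.jpg\", cv.IMREAD_COLOR)\n",
    "mask = np.zeros(img.shape[:2], np.uint8)\n",
    "bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state used by cv.grabCut\n",
    "fgd_model = np.zeros((1, 65), np.float64)\n",
    "rect = (50, 50, 450, 290)  # assumed initial (x, y, w, h) around the foreground\n",
    "\n",
    "cv.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv.GC_INIT_WITH_RECT)\n",
    "\n",
    "# The mask uses four values: GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD;\n",
    "# keep definite and probable foreground only.\n",
    "fg = np.where((mask == cv.GC_FGD) | (mask == cv.GC_PR_FGD), 1, 0).astype(\"uint8\")\n",
    "\n",
    "plt.figure(figsize=(6,6))\n",
    "plt.imshow((img * fg[:, :, np.newaxis])[:,:,::-1]);"
   ]
  },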
  {
   "cell_type": "markdown",
   "id": "35f22bca",
   "metadata": {},
   "source": [
    "![GrabCut - result](img/grabcut-result.png)"
   ]
  }
 ],
 "metadata": {
  "author": "Andrzej Wójtowicz",
  "email": "andre@amu.edu.pl",
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "lang": "pl",
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  },
  "subtitle": "06. Image segmentation and recognition [lab]",
  "title": "Computer Vision",
  "year": "2021"
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
wko-07.ipynb (new file, 595 lines)
@@ -0,0 +1,595 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "73e26798",
   "metadata": {},
   "source": [
    "![Logo 1](img/aitech-logotyp-1.jpg)\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "<h1> Computer Vision </h1>\n",
    "<h2> 07. <i>Video analysis: optical flow, object tracking</i> [lab]</h2>\n",
    "<h3>Andrzej Wójtowicz (2021)</h3>\n",
    "</div>\n",
    "\n",
    "![Logo 2](img/aitech-logotyp-2.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f124f5cd",
   "metadata": {},
   "source": [
    "In the following materials we will see how optical flow can be used to stabilize video, and how to track objects appearing in a video.\n",
    "\n",
    "Let us start by loading the required libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ed69629c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2 as cv\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "import IPython.display"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9ef8d67",
   "metadata": {},
   "source": [
    "# Optical flow\n",
    "\n",
    "Our goal will be to find keypoints in the video below that let us determine how the cyclist is moving:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1629fc29",
   "metadata": {},
   "outputs": [],
   "source": [
    "IPython.display.Video(\"vid/bike.mp4\", width=800)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aa176c9a",
   "metadata": {},
   "source": [
    "Let us load the video:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2463bd1d",
   "metadata": {},
   "outputs": [],
   "source": [
    "bike = cv.VideoCapture(\"vid/bike.mp4\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dde041a3",
   "metadata": {},
   "source": [
    "Using the Shi-Tomasi algorithm (an extension of the Harris method) we can find corners that are well suited for tracking. In OpenCV the algorithm is implemented in the [`cv.goodFeaturesToTrack()`](https://docs.opencv.org/4.5.3/dd/d1a/group__imgproc__feature.html#ga1d6bb77486c8f92d79c8793ad995d541) function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "36492aa6",
   "metadata": {},
   "outputs": [],
   "source": [
    "corners_num = 100\n",
    "corners_colors = np.random.randint(0, 255, (corners_num, 3))\n",
    "\n",
    "_, frame_1 = bike.read()\n",
    "frame_1_gray = cv.cvtColor(frame_1, cv.COLOR_BGR2GRAY)\n",
    "keypoints_1 = cv.goodFeaturesToTrack(\n",
    "    frame_1_gray, mask=None, maxCorners=corners_num,\n",
    "    qualityLevel=0.3, minDistance=7, blockSize=7)\n",
    "\n",
    "mask = np.zeros_like(frame_1)\n",
    "count = 0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "028dded7",
   "metadata": {},
   "source": [
    "To check how the points move between consecutive frames of the video, we use the Lucas-Kanade algorithm, implemented in the [`cv.calcOpticalFlowPyrLK()`](https://docs.opencv.org/4.5.3/dc/d6b/group__video__track.html#ga473e4b886d0bcc6b65831eb88ed93323) function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14b62820",
   "metadata": {},
   "outputs": [],
   "source": [
    "while True:\n",
    "    _, frame_2 = bike.read()\n",
    "    frame_2_gray = cv.cvtColor(frame_2, cv.COLOR_BGR2GRAY)\n",
    "    count += 1\n",
    "\n",
    "    keypoints_2, status, _ = cv.calcOpticalFlowPyrLK(\n",
    "        frame_1_gray, frame_2_gray, keypoints_1, None, winSize=(15, 15),\n",
    "        maxLevel=2, criteria=(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03))\n",
    "\n",
    "    keypoints_2_good = keypoints_2[status==1]\n",
    "    keypoints_1_good = keypoints_1[status==1]\n",
    "\n",
    "    for i, (kp2, kp1) in enumerate(zip(keypoints_2_good, keypoints_1_good)):\n",
    "        a, b = kp2.ravel()\n",
    "        a, b = int(a), int(b)\n",
    "        c, d = kp1.ravel()\n",
    "        c, d = int(c), int(d)\n",
    "        cv.line(mask, (a, b), (c, d), corners_colors[i].tolist(), 8, cv.LINE_AA)\n",
    "        cv.circle(frame_2, (a, b), 9, corners_colors[i].tolist(), -1)\n",
    "\n",
    "    display_frame = cv.add(frame_2, mask)\n",
    "    if count % 5 == 0:\n",
    "        plt.figure(figsize=(7,7))\n",
    "        plt.imshow(display_frame[:,:,::-1])\n",
    "    if count > 40:\n",
    "        break\n",
    "\n",
    "    frame_1_gray = frame_2_gray.copy()\n",
    "    keypoints_1 = keypoints_2_good.reshape(-1,1,2)\n",
    "\n",
    "bike.release()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8c01f59",
   "metadata": {},
   "source": [
    "We can see that some of the keypoints were detected outside the main tracked object; nevertheless, we are still able to determine the overall motion of the moving object."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "879a813e",
   "metadata": {},
   "source": [
    "## Image stabilization\n",
    "\n",
    "We will try to use optical flow for digital stabilization of a handheld video:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d8686953",
   "metadata": {},
   "outputs": [],
   "source": [
    "IPython.display.Video(\"vid/protest.mp4\", width=800)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3541db4",
   "metadata": {},
   "source": [
    "Let us load the video and prepare an output video that we will display next to the original, so that we can compare the results:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b18aad1",
   "metadata": {},
   "outputs": [],
   "source": [
    "cap = cv.VideoCapture(\"vid/protest.mp4\")\n",
    "n_frames = int(cap.get(cv.CAP_PROP_FRAME_COUNT))\n",
    "width = int(cap.get(cv.CAP_PROP_FRAME_WIDTH))\n",
    "height = int(cap.get(cv.CAP_PROP_FRAME_HEIGHT))\n",
    "fps = cap.get(cv.CAP_PROP_FPS)\n",
    "\n",
    "out = cv.VideoWriter('vid/gen-protest.avi', cv.VideoWriter_fourcc(*'MJPG'), fps, (width*2, height))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "27355bd0",
   "metadata": {},
   "source": [
    "Between consecutive frames we find keypoints and track how they have moved. Based on this, using [`cv.estimateAffinePartial2D()`](https://docs.opencv.org/4.5.3/d9/d0c/group__calib3d.html#gad767faff73e9cbd8b9d92b955b50062d), we can estimate the transformations (translation and rotation) that occurred between successive frames:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c00b1e9d",
   "metadata": {},
   "outputs": [],
   "source": [
    "_, prev = cap.read()\n",
    "prev_gray = cv.cvtColor(prev, cv.COLOR_BGR2GRAY)\n",
    "\n",
    "transforms = np.zeros((n_frames-1, 3), np.float32)\n",
    "\n",
    "for i in range(n_frames-2):\n",
    "    prev_pts = cv.goodFeaturesToTrack(prev_gray, maxCorners=200,\n",
    "                                      qualityLevel=0.01, minDistance=30, blockSize=3)\n",
    "\n",
    "    success, curr = cap.read()\n",
    "    if not success:\n",
    "        break\n",
    "    curr_gray = cv.cvtColor(curr, cv.COLOR_BGR2GRAY)\n",
    "\n",
    "    curr_pts, status, _ = cv.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)\n",
    "\n",
    "    idx = np.where(status==1)[0]\n",
    "    prev_pts = prev_pts[idx]\n",
    "    curr_pts = curr_pts[idx]\n",
    "\n",
    "    mat, _ = cv.estimateAffinePartial2D(prev_pts, curr_pts)\n",
    "    # translation\n",
    "    dx = mat[0,2]\n",
    "    dy = mat[1,2]\n",
    "    # rotation angle\n",
    "    da = np.arctan2(mat[1,0], mat[0,0])\n",
    "\n",
    "    transforms[i] = [dx,dy,da]\n",
    "\n",
    "    prev_gray = curr_gray"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba9fce7d",
   "metadata": {},
   "source": [
    "We will also prepare a few helper functions. Given the series of transformations, we smooth their individual components with a moving average."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0fd89a26",
   "metadata": {},
   "outputs": [],
   "source": [
    "def moving_average(values, radius):\n",
    "    window_size = 2 * radius + 1\n",
    "    mask = np.ones(window_size)/window_size\n",
    "\n",
    "    values_padded = np.lib.pad(values, (radius, radius), 'edge')\n",
    "    values_smoothed = np.convolve(values_padded, mask, mode='same')\n",
    "\n",
    "    return values_smoothed[radius:-radius]  # remove padding\n",
    "\n",
    "def smooth(trajectory, radius=50):\n",
    "    smoothed_trajectory = np.copy(trajectory)\n",
    "    for i in range(smoothed_trajectory.shape[1]):\n",
    "        smoothed_trajectory[:,i] = moving_average(trajectory[:,i], radius)\n",
    "\n",
    "    return smoothed_trajectory"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9f4d1df0",
   "metadata": {},
   "source": [
    "We can now compute the cumulative transformations relative to the beginning of the video, smooth them with the moving average, and then apply the resulting differences to the individual transformations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "efb52c4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "trajectory = np.cumsum(transforms, axis=0)\n",
    "smoothed_trajectory = smooth(trajectory)\n",
    "\n",
    "difference = smoothed_trajectory - trajectory\n",
    "transforms_smooth = transforms + difference"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ef3b313",
   "metadata": {},
   "source": [
    "Finally, based on the smoothed transformations, we adjust the individual frames of the video. Stabilization can additionally introduce black borders in the output image, so we compensate for this effect by zooming in slightly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8d4528b",
   "metadata": {},
   "outputs": [],
   "source": [
    "cap.set(cv.CAP_PROP_POS_FRAMES, 0)  # back to first frame\n",
    "\n",
    "for i in range(n_frames-2):\n",
    "    success, frame = cap.read()\n",
    "    if not success:\n",
    "        break\n",
    "\n",
    "    dx = transforms_smooth[i,0]\n",
    "    dy = transforms_smooth[i,1]\n",
    "    da = transforms_smooth[i,2]\n",
    "\n",
    "    mat = np.zeros((2,3), np.float32)\n",
    "    mat[0,0] = np.cos(da)\n",
    "    mat[0,1] = -np.sin(da)\n",
    "    mat[1,0] = np.sin(da)\n",
    "    mat[1,1] = np.cos(da)\n",
    "    mat[0,2] = dx\n",
    "    mat[1,2] = dy\n",
    "\n",
    "    frame_stabilized = cv.warpAffine(frame, mat, (width, height))\n",
    "\n",
    "    mat = cv.getRotationMatrix2D((width/2, height/2), 0, 1.1)\n",
    "    frame_stabilized = cv.warpAffine(frame_stabilized, mat, (width, height))\n",
    "\n",
    "    frame_out = cv.hconcat([frame, frame_stabilized])  # side by side, frame by frame\n",
    "\n",
    "    out.write(frame_out)\n",
    "\n",
    "out.release()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c204ec3a",
   "metadata": {},
   "source": [
    "To display the resulting video in a browser, we use the H264 codec:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b58bce1",
   "metadata": {},
   "outputs": [],
   "source": [
    "!ffmpeg -y -hide_banner -loglevel warning -nostats -i vid/gen-protest.avi -vcodec libx264 vid/gen-protest.mp4"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fce1d5cd",
   "metadata": {},
   "source": [
    "The resulting video:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "041e29a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "IPython.display.Video(\"vid/gen-protest.mp4\", width=800)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fd935d2",
   "metadata": {},
   "source": [
    "# Object tracking"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dbeb1ae1",
   "metadata": {},
   "source": [
    "Suppose that we want to track the movement of the football players in the video below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a31e78d",
   "metadata": {},
   "outputs": [],
   "source": [
    "IPython.display.Video(\"vid/football.mp4\", width=800)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2fc45895",
   "metadata": {},
   "source": [
    "The OpenCV library provides [several algorithms](https://docs.opencv.org/4.5.3/dc/d6b/group__tracking__legacy.html) for object tracking. Below we use the [*Multiple Instance Learning*](https://docs.opencv.org/4.5.3/d9/dbc/classcv_1_1legacy_1_1TrackerMIL.html) algorithm:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a35e3ad7",
   "metadata": {},
   "outputs": [],
   "source": [
    "video = cv.VideoCapture(\"vid/football.mp4\")\n",
    "_, frame = video.read()\n",
    "\n",
    "bbox = (45, 350, 120, 270)\n",
    "\n",
    "tracker = cv.legacy.TrackerMIL_create()\n",
    "tracker.init(frame, bbox)\n",
    "\n",
    "pt_1 = (int(bbox[0]), int(bbox[1]))\n",
    "pt_2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))\n",
    "cv.rectangle(frame, pt_1, pt_2, (0, 0, 255), 4, cv.LINE_8)\n",
    "\n",
    "plt.figure(figsize=(7,7))\n",
    "plt.imshow(frame[:,:,::-1]);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "934dde10",
   "metadata": {},
   "source": [
    "We could inspect the results between individual frames, but here, for presentation purposes, we display every 10th frame so that the motion is easier to see. We can also measure the relative speed of the algorithm:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7650ee1",
   "metadata": {},
   "outputs": [],
   "source": [
    "count = 50\n",
    "\n",
    "while count > 0:\n",
    "\n",
    "    ok, frame = video.read()\n",
    "    if not ok:\n",
    "        break\n",
    "\n",
    "    timer = cv.getTickCount()\n",
    "\n",
    "    ok, bbox = tracker.update(frame)\n",
    "\n",
    "    fps = cv.getTickFrequency() / (cv.getTickCount() - timer)\n",
    "\n",
    "    if ok:\n",
    "        pt_1 = (int(bbox[0]), int(bbox[1]))\n",
    "        pt_2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))\n",
    "        cv.rectangle(frame, pt_1, pt_2, (0,0,255), 4, cv.LINE_8)\n",
    "    else:\n",
    "        cv.putText(frame, \"Tracking failure\", (20, 180),\n",
    "                   cv.FONT_HERSHEY_SIMPLEX, 2, (0,0,255), 2, cv.LINE_AA)\n",
    "\n",
    "    cv.putText(frame, \"FPS : \" + str(int(fps)), (20,50),\n",
    "               cv.FONT_HERSHEY_SIMPLEX, 2, (0,0,255), 2, cv.LINE_AA)\n",
    "\n",
    "    if count % 10 == 0:\n",
    "        plt.figure(figsize=(7,7))\n",
    "        plt.imshow(frame[:,:,::-1])\n",
    "    count -= 1\n",
    "\n",
    "video.release()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf17ff66",
   "metadata": {},
   "source": [
    "It is also possible to track several objects at the same time:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d2e56fa9",
   "metadata": {},
   "outputs": [],
   "source": [
    "video = cv.VideoCapture(\"vid/football.mp4\")\n",
    "_, frame = video.read()\n",
    "\n",
    "bboxes = [(45, 350, 120, 270), (755, 350, 120, 270)]\n",
    "colors = [(0, 0, 255), (0, 255, 0)]\n",
    "\n",
    "multi_tracker = cv.legacy.MultiTracker_create()\n",
    "\n",
    "for bbox in bboxes:\n",
    "    multi_tracker.add(cv.legacy.TrackerMIL_create(), frame, bbox)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f989b8a",
   "metadata": {},
   "outputs": [],
   "source": [
    "count = 50\n",
    "\n",
    "while count > 0:\n",
    "\n",
    "    ok, frame = video.read()\n",
    "    if not ok:\n",
    "        break\n",
    "\n",
    "    _, boxes = multi_tracker.update(frame)\n",
    "\n",
    "    for i, bbox in enumerate(boxes):\n",
    "        pt_1 = (int(bbox[0]), int(bbox[1]))\n",
    "        pt_2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))\n",
    "        cv.rectangle(frame, pt_1, pt_2, colors[i], 4, cv.LINE_8)\n",
    "\n",
    "    if count % 10 == 0:\n",
    "        plt.figure(figsize=(7,7))\n",
    "        plt.imshow(frame[:,:,::-1])\n",
    "    count -= 1\n",
    "\n",
    "video.release()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1d84a3a",
   "metadata": {},
   "source": [
    "# Exercise 1\n",
    "\n",
    "For the video `vid/football.mp4`, compare the tracking quality of the available algorithms. Save the results in a single video. (A minimal sketch of instantiating several trackers is added below as a starting point.)\n",
    "\n",
    "![Comparison of object tracking algorithms](img/football-multi.png)"
   ]
  },
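  {
   "cell_type": "markdown",
   "id": "a5f3c007",
   "metadata": {},
   "source": [
    "The sketch below (added to these materials, not a full solution) relies on the fact that the legacy trackers share one interface, so they can be created from a list of factory functions and updated in a single loop. Which constructors are available depends on the opencv-contrib build; the ones listed here should exist in OpenCV 4.5.3."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a5f3c008",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Added sketch: run several legacy trackers from the same initial box.\n",
    "tracker_factories = [\n",
    "    cv.legacy.TrackerMIL_create,\n",
    "    cv.legacy.TrackerKCF_create,\n",
    "    cv.legacy.TrackerCSRT_create,\n",
    "    cv.legacy.TrackerMedianFlow_create,\n",
    "]\n",
    "\n",
    "video = cv.VideoCapture(\"vid/football.mp4\")\n",
    "_, frame = video.read()\n",
    "bbox = (45, 350, 120, 270)\n",
    "\n",
    "trackers = []\n",
    "for make_tracker in tracker_factories:\n",
    "    t = make_tracker()\n",
    "    t.init(frame, bbox)\n",
    "    trackers.append(t)\n",
    "\n",
    "# One update step; a full comparison would loop over frames and draw each box.\n",
    "_, frame = video.read()\n",
    "for t in trackers:\n",
    "    ok, new_bbox = t.update(frame)\n",
    "    print(type(t).__name__, ok, new_bbox)\n",
    "\n",
    "video.release()"
   ]
  }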
 ],
 "metadata": {
  "author": "Andrzej Wójtowicz",
  "email": "andre@amu.edu.pl",
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "lang": "pl",
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  },
  "subtitle": "07. Video analysis [lab]",
  "title": "Computer Vision",
  "year": "2021"
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
wko-08.ipynb (new file, 459 lines)
@@ -0,0 +1,459 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "909d3c02",
   "metadata": {},
   "source": [
    "![Logo 1](img/aitech-logotyp-1.jpg)\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "<h1> Computer Vision </h1>\n",
    "<h2> 08. <i>Face recognition</i> [lab]</h2>\n",
    "<h3>Andrzej Wójtowicz (2021)</h3>\n",
    "</div>\n",
    "\n",
    "![Logo 2](img/aitech-logotyp-2.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a9fde6b",
   "metadata": {},
   "source": [
    "In the following materials we present classical face recognition methods. The topics covered here can be found in *5.2.3 Principal component analysis* of R. Szeliski (2022) *Computer Vision: Algorithms and Applications* and in the [documentation](https://docs.opencv.org/4.5.3/da/d60/tutorial_face_main.html).\n",
    "\n",
    "Let us start by loading the required libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1d86977a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2 as cv\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "import sklearn.metrics\n",
    "import ipywidgets\n",
    "import os\n",
    "import random"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c5a62135",
   "metadata": {},
   "source": [
    "Let us unpack the dataset we will be working on:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0e0f1723",
   "metadata": {},
   "outputs": [],
   "source": [
    "!cd datasets && unzip -qo yaleextb.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e6a0efb1",
   "metadata": {},
   "source": [
    "Our dataset contains several dozen images each of several dozen people photographed under varying lighting conditions. We split the loaded images into training and test sets in a 3/1 ratio and display a few sample images:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b775bbf",
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_dir = \"datasets/yaleextb\"\n",
    "\n",
    "img_data = []\n",
    "img_labels = []\n",
    "\n",
    "images = os.listdir(dataset_dir)\n",
    "\n",
    "n_examples = 15\n",
    "\n",
    "for i in range(1, 40):\n",
    "    i_str = str(i).zfill(2)\n",
    "    images_p = [img for img in images if img.startswith(f\"yaleB{i_str}\")]\n",
    "\n",
    "    for img in images_p[:n_examples]:\n",
    "        img_data.append(cv.imread(f\"{dataset_dir}/{img}\", cv.IMREAD_GRAYSCALE))\n",
    "        img_labels.append(i)\n",
    "\n",
    "random.seed(1337)\n",
    "selector = random.choices([False, True], k=len(images), weights=[3, 1])\n",
    "train_data = [x for x, y in zip(img_data, selector) if not y]\n",
    "train_labels = [x for x, y in zip(img_labels, selector) if not y]\n",
    "test_data = [x for x, y in zip(img_data, selector) if y]\n",
    "test_labels = [x for x, y in zip(img_labels, selector) if y]\n",
    "\n",
    "plt.figure(figsize=(12,5))\n",
    "for i in range(4):\n",
    "    plt.subplot(251 + i)\n",
    "    plt.imshow(train_data[i], cmap='gray');\n",
    "for i in range(4):\n",
    "    plt.subplot(256 + i)\n",
    "    plt.imshow(train_data[-i-20], cmap='gray');"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e315630",
   "metadata": {},
   "source": [
    "The first model is *Eigenfaces*, implemented in [`EigenFaceRecognizer`](https://docs.opencv.org/4.5.3/dd/d7c/classcv_1_1face_1_1EigenFaceRecognizer.html). The main idea is to use PCA for dimensionality reduction. In our example we keep 60 eigenvectors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0473c8ae",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = cv.face.EigenFaceRecognizer_create(60)\n",
    "model.train(np.array(train_data), np.array(train_labels))"
   ]
  },
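  {
   "cell_type": "markdown",
   "id": "a5f3c009",
   "metadata": {},
   "source": [
    "As an added aside (not part of the original material): besides the manual nearest-neighbour search used later in this notebook, the trained recognizer can also be queried directly; `predict()` returns the predicted label together with a distance-based confidence value, where lower means a closer match."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a5f3c010",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Added sketch: query the trained Eigenfaces model directly.\n",
    "label, confidence = model.predict(test_data[0])\n",
    "print(f\"predicted: {label}, true: {test_labels[0]}, confidence: {confidence:.2f}\")"
   ]
  },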
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "7a753f2d",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Zachowane wektory własne możemy zwizualizować:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "f797fe86",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"img_shape = train_data[0].shape\n",
|
||||||
|
"plt.figure(figsize=(12,5))\n",
|
||||||
|
"for i in range(5):\n",
|
||||||
|
" e_v = model.getEigenVectors()[:,i]\n",
|
||||||
|
" e_v = np.reshape(e_v, img_shape)\n",
|
||||||
|
"\n",
|
||||||
|
" plt.subplot(151+i)\n",
|
||||||
|
" plt.imshow(e_v, cmap='gray');"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "19545151",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Możemy zobaczyć jakie potencjalne twarze znajdują się w naszej przestrzeni. Do *uśrednionej* twarzy dodajemy kolejne wektory własne z odpowiednimi wagami. Poniżej mamy przykład wykorzystujący 6 wektorów:"
|
||||||
|
]
|
||||||
|
},
{
"cell_type": "code",
"execution_count": null,
"id": "5265f337",
"metadata": {},
"outputs": [],
"source": [
"mean = model.getMean()\n",
"W = model.getEigenVectors()\n",
"\n",
"def generate_face(**args):\n",
"    img = mean.copy()\n",
"    for i, k in enumerate(args.keys()):\n",
"        img = np.add(img, W[:,i]*(10*args[k]))\n",
"    \n",
"    img = np.reshape(img, img_shape)\n",
"    plt.figure(figsize=(5,5))\n",
"    plt.imshow(img, cmap='gray')\n",
"    plt.show()\n",
"    \n",
"ipywidgets.interactive(generate_face, \n",
"                       w_0=ipywidgets.IntSlider(min=-128, max=128),\n",
"                       w_1=ipywidgets.IntSlider(min=-128, max=128),\n",
"                       w_2=ipywidgets.IntSlider(min=-128, max=128),\n",
"                       w_3=ipywidgets.IntSlider(min=-128, max=128),\n",
"                       w_4=ipywidgets.IntSlider(min=-128, max=128),\n",
"                       w_5=ipywidgets.IntSlider(min=-128, max=128))"
]
},
{
"cell_type": "markdown",
"id": "fd4bdce6",
"metadata": {},
"source": [
"We can now try to reconstruct, for example, the first face from the training set. We retrieve its projections (weights) from our model and, as above, use the mean face and the eigenvectors. We can see that using more eigenvectors increases the precision of the reconstruction:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2619c6f9",
"metadata": {},
"outputs": [],
"source": [
"pro = model.getProjections()[0]\n",
"\n",
"def reconstruct_face(k):\n",
"    img = mean.copy()\n",
"\n",
"    for i in range(k):\n",
"        img = np.add(img, W[:,i]*pro[0,i])\n",
"    \n",
"    return img\n",
"\n",
"plt.figure(figsize=(12,6))\n",
"for i in range(6):\n",
"    k = (i+1)*10\n",
"    r_face = np.reshape(reconstruct_face(k), img_shape)\n",
"    plt.subplot(251+i)\n",
"    plt.imshow(r_face, cmap='gray')\n",
"    plt.title(f\"k = {k}\")\n",
"    \n",
"plt.subplot(257)\n",
"plt.imshow(train_data[0], cmap='gray');\n",
"plt.title(\"original\");"
]
},
{
"cell_type": "markdown",
"id": "ae87277a",
"metadata": {},
"source": [
"Let us now try to identify the people appearing in two sample images from the test set. For an unknown face we compute its projection and search for the nearest-neighbour projection from the training set. Below we have one example with a correct identification and one with an incorrect identification:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "828f3134",
"metadata": {},
"outputs": [],
"source": [
"def find_face(query_id):\n",
"    query_face = test_data[query_id]\n",
"    query_label = test_labels[query_id]\n",
"\n",
"    x = np.reshape(query_face, mean.shape)\n",
"    x_coeff = np.dot(x - mean, W)\n",
"\n",
"    best_face = None\n",
"    best_label = None\n",
"    best_dist = float('inf')\n",
"\n",
"    for i, p in enumerate(model.getProjections()):\n",
"        dist = np.linalg.norm(np.reshape(p, 60) - np.reshape(x_coeff, 60))\n",
"\n",
"        if dist < best_dist:\n",
"            best_face = train_data[i]\n",
"            best_label = train_labels[i]\n",
"            best_dist = dist\n",
"    \n",
"    return query_face, query_label, best_face, best_label\n",
"\n",
"qf_1, ql_1, bf_1, bl_1 = find_face(45)\n",
"qf_2, ql_2, bf_2, bl_2 = find_face(10)\n",
"\n",
"plt.figure(figsize=(8,11))\n",
"plt.subplot(221)\n",
"plt.imshow(qf_1, cmap='gray')\n",
"plt.title(f\"Face 1: query label = {ql_1}\")\n",
"plt.subplot(222)\n",
"plt.imshow(bf_1, cmap='gray');\n",
"plt.title(f\"Face 1: best label = {bl_1}\")\n",
"plt.subplot(223)\n",
"plt.imshow(qf_2, cmap='gray')\n",
"plt.title(f\"Face 2: query label = {ql_2}\")\n",
"plt.subplot(224)\n",
"plt.imshow(bf_2, cmap='gray');\n",
"plt.title(f\"Face 2: best label = {bl_2}\");"
]
},
{
"cell_type": "markdown",
"id": "43f9a8e5",
"metadata": {},
"source": [
"A more compact way of making the prediction is the `predict()` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bf736bdd",
"metadata": {},
"outputs": [],
"source": [
"print(test_labels[45], model.predict(test_data[45])[0])\n",
"print(test_labels[10], model.predict(test_data[10])[0])"
]
},
{
"cell_type": "markdown",
"id": "eeaf62b5",
"metadata": {},
"source": [
"As we can see below, this method does not achieve particularly satisfying results (in general it copes poorly with changes in illumination):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12c65438",
"metadata": {},
"outputs": [],
"source": [
"predictions = []\n",
"for test_img in test_data:\n",
"    p_label, p_conf = model.predict(test_img)\n",
"    predictions.append(p_label)\n",
"    \n",
"print(f\"Accuracy: {sklearn.metrics.accuracy_score(test_labels, predictions) * 100:.2f} %\")"
]
},
{
"cell_type": "markdown",
"id": "ea5d879b",
"metadata": {},
"source": [
"Below we briefly present two further developments of this algorithm. The first one is *Fisherfaces*, implemented in [`FisherFaceRecognizer`](https://docs.opencv.org/4.5.3/d2/de9/classcv_1_1face_1_1FisherFaceRecognizer.html). This time, using LDA, we additionally want to take the between-class scatter into account (cf. [example](https://sthalles.github.io/fisher-linear-discriminant/); the criterion that LDA maximizes is recalled below). We create a model with 40 components:"
]
},
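{
"cell_type": "markdown",
"id": "c0ffee02",
"metadata": {},
"source": [
"As a reminder, LDA looks for directions $w$ that maximize the Fisher criterion\n",
"\n",
"$$J(w) = \\frac{w^T S_B w}{w^T S_W w},$$\n",
"\n",
"where $S_B$ and $S_W$ are the between-class and within-class scatter matrices, so faces of the same person are pulled together while different people are pushed apart."
]
},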
{
"cell_type": "code",
"execution_count": null,
"id": "4eb5b746",
"metadata": {},
"outputs": [],
"source": [
"model = cv.face.FisherFaceRecognizer_create(40)\n",
"model.train(np.array(train_data), np.array(train_labels))"
]
},
{
"cell_type": "markdown",
"id": "e9f334be",
"metadata": {},
"source": [
"Note that here we obtain a result more than twice as good:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96faa192",
"metadata": {},
"outputs": [],
"source": [
"predictions = []\n",
"for test_img in test_data:\n",
"    p_label, p_conf = model.predict(test_img)\n",
"    predictions.append(p_label)\n",
"    \n",
"print(f\"Accuracy: {sklearn.metrics.accuracy_score(test_labels, predictions) * 100:.2f} %\")"
]
},
{
"cell_type": "markdown",
"id": "02220e5f",
"metadata": {},
"source": [
"A further development is the *Local Binary Patterns Histograms* (LBPH) model, implemented in [`LBPHFaceRecognizer`](https://docs.opencv.org/4.5.3/df/d25/classcv_1_1face_1_1LBPHFaceRecognizer.html). In this case we want, for example, to account for people being lit differently than in our training set. As before, we care about dimensionality reduction, but this time we achieve it by computing features (thresholding) for individual pixels within given regions; a minimal sketch of the LBP feature itself is shown below."
]
},
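{
"cell_type": "markdown",
"id": "c0ffee03",
"metadata": {},
"source": [
"As an illustration of the thresholding idea (our own helper, `lbp_code`, not part of the OpenCV API): a plain-Python sketch of the basic 3x3 LBP code of a single pixel. OpenCV's implementation generalizes this with the `radius` and `neighbors` parameters and then histograms the codes over the `grid_x` x `grid_y` cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0ffee04",
"metadata": {},
"outputs": [],
"source": [
"def lbp_code(img, y, x):\n",
"    # compare the 8 neighbours (clockwise from top-left) with the centre pixel\n",
"    center = img[y, x]\n",
"    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]\n",
"    code = 0\n",
"    for bit, (dy, dx) in enumerate(offsets):\n",
"        if img[y+dy, x+dx] >= center:\n",
"            code |= 1 << bit\n",
"    return code\n",
"\n",
"print(lbp_code(train_data[0], 10, 10))"
]
},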
{
"cell_type": "code",
"execution_count": null,
"id": "61eeffdf",
"metadata": {},
"outputs": [],
"source": [
"model = cv.face.LBPHFaceRecognizer_create(radius=10, neighbors=10, grid_x=32, grid_y=32)\n",
"model.train(np.array(train_data), np.array(train_labels))"
]
},
{
"cell_type": "markdown",
"id": "0d64cb5a",
"metadata": {},
"source": [
"The result obtained is a few percentage points better than the previous model; however, we may notice that changing the default parameters to ones that increase precision also increases the time needed to make a prediction:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ca2e319d",
"metadata": {},
"outputs": [],
"source": [
"predictions = []\n",
"for test_img in test_data:\n",
"    p_label, p_conf = model.predict(test_img)\n",
"    predictions.append(p_label)\n",
"    \n",
"print(f\"Accuracy: {sklearn.metrics.accuracy_score(test_labels, predictions) * 100:.2f} %\")"
]
},
{
"cell_type": "markdown",
"id": "00196405",
"metadata": {},
"source": [
"# Exercise 1\n",
"\n",
"In the `datasets` directory there is the `att_faces` image set. Check what kind of images these are and how the algorithms above perform on this set."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51b8a256",
"metadata": {},
"outputs": [],
"source": [
"# room for experiments"
]
}
],
"metadata": {
"author": "Andrzej Wójtowicz",
"email": "andre@amu.edu.pl",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"lang": "pl",
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"subtitle": "08. Rozpoznawanie twarzy [laboratoria]",
"title": "Widzenie komputerowe",
"year": "2021"
},
"nbformat": 4,
"nbformat_minor": 5
}
519
wko-09.ipynb
Normal file
@ -0,0 +1,519 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "80377b3b",
"metadata": {},
"source": [
"![Logo 1](img/aitech-logotyp-1.jpg)\n",
"<div class=\"alert alert-block alert-info\">\n",
"<h1> Widzenie komputerowe </h1>\n",
"<h2> 09. <i>Metody głębokiego uczenia (1)</i> [laboratoria]</h2>\n",
"<h3>Andrzej Wójtowicz (2021)</h3>\n",
"</div>\n",
"\n",
"![Logo 2](img/aitech-logotyp-2.jpg)"
]
},
{
"cell_type": "markdown",
"id": "07159136",
"metadata": {},
"source": [
"In the material below we will see how to use deep learning methods for neural networks in the OpenCV package.\n",
"\n",
"Let us start by loading the necessary libraries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2e906f0",
"metadata": {},
"outputs": [],
"source": [
"import cv2 as cv\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"id": "f4348bc5",
"metadata": {},
"source": [
"OpenCV supports [many](https://github.com/opencv/opencv/wiki/Deep-Learning-in-OpenCV) neural network libraries and models. Models are trained outside OpenCV - the library is used only for prediction, although it does contain quite a few optimizations compared to the source neural network libraries, so prediction can actually be faster here.\n",
"\n",
"We will download the model files and auxiliary data from the web and save them in the `dnn` directory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42b85f55",
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p dnn"
]
},
{
"cell_type": "markdown",
"id": "ac09b098",
"metadata": {},
"source": [
"# Image classification\n",
"\n",
"We will try out an image classification network trained on the [ImageNet](https://www.image-net.org/) dataset. Let us download the file describing the 1000 possible classes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85b1b68c",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/classification_classes_ILSVRC2012.txt https://raw.githubusercontent.com/opencv/opencv/master/samples/data/dnn/classification_classes_ILSVRC2012.txt"
]
},
{
"cell_type": "markdown",
"id": "fd0c577b",
"metadata": {},
"source": [
"Let us look at the first five classes in the file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fb0d0546",
"metadata": {},
"outputs": [],
"source": [
"with open('dnn/classification_classes_ILSVRC2012.txt', 'r') as f_fd:\n",
"    classes = f_fd.read().splitlines()\n",
"    \n",
"print(len(classes), classes[:5])"
]
},
{
"cell_type": "markdown",
"id": "5b0ee6ff",
"metadata": {},
"source": [
"For classification we will use the [DenseNet](https://arxiv.org/abs/1608.06993) network. We will download one of its smaller [reimplementations](https://github.com/shicai/DenseNet-Caffe), hosted among others on Google Drive (we have to install one additional package):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fb2bf2a1",
"metadata": {},
"outputs": [],
"source": [
"!pip3 install --user --disable-pip-version-check gdown"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27996509",
"metadata": {},
"outputs": [],
"source": [
"import gdown\n",
"\n",
"url = 'https://drive.google.com/uc?id=0B7ubpZO7HnlCcHlfNmJkU2VPelE'\n",
"output = 'dnn/DenseNet_121.caffemodel'\n",
"gdown.download(url, output, quiet=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "648ec9c9",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/DenseNet_121.prototxt https://raw.githubusercontent.com/shicai/DenseNet-Caffe/master/DenseNet_121.prototxt"
]
},
{
"cell_type": "markdown",
"id": "f7294c54",
"metadata": {},
"source": [
"Particular neural network libraries have dedicated loading functions, e.g. [`readNetFromCaffe()`](https://docs.opencv.org/4.5.3/d6/d0f/group__dnn.html#ga29d0ea5e52b1d1a6c2681e3f7d68473a) or [`readNetFromTorch()`](https://docs.opencv.org/4.5.3/d6/d0f/group__dnn.html#ga65a1da76cb7d6852bdf7abbd96f19084), but we can also use the general [`readNet()`](https://docs.opencv.org/4.5.3/d6/d0f/group__dnn.html#ga3b34fe7a29494a6a4295c169a7d32422):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6fd2d6b3",
"metadata": {},
"outputs": [],
"source": [
"model = cv.dnn.readNet(model='dnn/DenseNet_121.prototxt', config='dnn/DenseNet_121.caffemodel', framework='Caffe')"
]
},
{
"cell_type": "markdown",
"id": "fe22fd6f",
"metadata": {},
"source": [
"We will try to classify the image below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6ace4606",
"metadata": {},
"outputs": [],
"source": [
"image = cv.imread('img/flamingo.jpg')\n",
"plt.figure(figsize=[5,5])\n",
"plt.imshow(image[:,:,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "e51db3ac",
"metadata": {},
"source": [
"To pass the image through the network we have to change its representation using the [`blobFromImage()`](https://docs.opencv.org/4.5.3/d6/d0f/group__dnn.html#ga29f34df9376379a603acd8df581ac8d7) function. To get sensible results we have to set the preprocessing parameters (this information is given on the [model page](https://github.com/shicai/DenseNet-Caffe)):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4e945ae",
"metadata": {},
"outputs": [],
"source": [
"image_blob = cv.dnn.blobFromImage(image=image, scalefactor=0.017, size=(224, 224), mean=(104, 117, 123), \n",
"                                  swapRB=False, crop=False)"
]
},
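{
"cell_type": "markdown",
"id": "c0ffee05",
"metadata": {},
"source": [
"The blob is a 4D tensor in NCHW layout (batch, channels, height, width); a quick sanity check of this assumption:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0ffee06",
"metadata": {},
"outputs": [],
"source": [
"print(image_blob.shape)  # expected: (1, 3, 224, 224)"
]
},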
{
"cell_type": "markdown",
"id": "625aebdd",
"metadata": {},
"source": [
"We set the input of our network and retrieve the computed values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "753333a1",
"metadata": {},
"outputs": [],
"source": [
"model.setInput(image_blob)\n",
"outputs = model.forward()[0]"
]
},
{
"cell_type": "markdown",
"id": "34316ddb",
"metadata": {},
"source": [
"We compute which class is the most probable (the raw scores are turned into probabilities with the softmax, recalled below):"
]
},
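{
"cell_type": "markdown",
"id": "c0ffee07",
"metadata": {},
"source": [
"The cell below applies the softmax to the raw scores $z_i$:\n",
"\n",
"$$p_i = \\frac{e^{z_i}}{\\sum_{j} e^{z_j}}$$"
]
},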
{
"cell_type": "code",
"execution_count": null,
"id": "13423a6d",
"metadata": {},
"outputs": [],
"source": [
"outputs = outputs.reshape(1000, 1)\n",
"\n",
"label_id = np.argmax(outputs)\n",
"\n",
"probs = np.exp(outputs) / np.sum(np.exp(outputs))"
]
},
{
"cell_type": "markdown",
"id": "874c1b1d",
"metadata": {},
"source": [
"The result:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec75a3c5",
"metadata": {},
"outputs": [],
"source": [
"plt.imshow(image[:,:,::-1])\n",
"plt.title(classes[label_id])\n",
"print(\"{:.2f} %\".format(np.max(probs) * 100.0))"
]
},
{
"cell_type": "markdown",
"id": "3808c42c",
"metadata": {},
"source": [
"# Face detection\n",
"\n",
"For face detection we will use a network based on [SSD](https://github.com/weiliu89/caffe/tree/ssd):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c0df387",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/res10_300x300_ssd_iter_140000_fp16.caffemodel https://raw.githubusercontent.com/opencv/opencv_3rdparty/dnn_samples_face_detector_20180205_fp16/res10_300x300_ssd_iter_140000_fp16.caffemodel\n",
"!wget -q --show-progress -O dnn/res10_300x300_ssd_iter_140000_fp16.prototxt https://raw.githubusercontent.com/opencv/opencv/master/samples/dnn/face_detector/deploy.prototxt"
]
},
{
"cell_type": "markdown",
"id": "c6142f6e",
"metadata": {},
"source": [
"We load the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60d41efb",
"metadata": {},
"outputs": [],
"source": [
"model = cv.dnn.readNet(model='dnn/res10_300x300_ssd_iter_140000_fp16.prototxt', config='dnn/res10_300x300_ssd_iter_140000_fp16.caffemodel', framework='Caffe')"
]
},
{
"cell_type": "markdown",
"id": "ad612cc6",
"metadata": {},
"source": [
"We will want to detect the faces in the image below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b404d8c4",
"metadata": {},
"outputs": [],
"source": [
"image = cv.imread('img/people.jpg')\n",
"plt.figure(figsize=[7,7])\n",
"plt.imshow(image[:,:,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "a77f8e64",
"metadata": {},
"source": [
"We find the faces and mark them in the photo (we chose 0.5 as the threshold; see the [preprocessing](https://github.com/opencv/opencv/tree/master/samples/dnn#face-detection) notes):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d16f230",
"metadata": {},
"outputs": [],
"source": [
"height, width, _ = image.shape\n",
"\n",
"image_blob = cv.dnn.blobFromImage(image, scalefactor=1.0, size=(300, 300), mean=[104, 177, 123], \n",
"                                  swapRB=False, crop=False)\n",
"\n",
"model.setInput(image_blob)\n",
"\n",
"detections = model.forward()\n",
"\n",
"image_out = image.copy()\n",
"\n",
"for i in range(detections.shape[2]):\n",
"    confidence = detections[0, 0, i, 2]\n",
"    if confidence > 0.5:\n",
"\n",
"        box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])\n",
"        (x1, y1, x2, y2) = box.astype('int')\n",
"\n",
"        cv.rectangle(image_out, (x1, y1), (x2, y2), (0, 255, 0), 6)\n",
"        label = '{:.3f}'.format(confidence)\n",
"        label_size, base_line = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 3.0, 1)\n",
"        cv.rectangle(image_out, (x1, y1 - label_size[1]), (x1 + label_size[0], y1 + base_line), \n",
"                     (255, 255, 255), cv.FILLED)\n",
"        cv.putText(image_out, label, (x1, y1), cv.FONT_HERSHEY_SIMPLEX, 3.0, (0, 0, 0))\n",
"    \n",
"plt.figure(figsize=[12,12])\n",
"plt.imshow(image_out[:,:,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "590841cd",
"metadata": {},
"source": [
"## Facial landmarks\n",
"\n",
"OpenCV makes it possible to detect facial landmarks. We will use a [model](http://www.jiansun.org/papers/CVPR14_FaceAlignment.pdf) implemented during Google Summer of Code, via [`createFacemarkLBF()`](https://docs.opencv.org/4.5.3/d4/d48/namespacecv_1_1face.html#a0bec73a729ed878430c2feb9ce65bc2a):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8534a399",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/lbfmodel.yaml https://raw.githubusercontent.com/kurnianggoro/GSOC2017/master/data/lbfmodel.yaml"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2971f10",
"metadata": {},
"outputs": [],
"source": [
"landmark_detector = cv.face.createFacemarkLBF()\n",
"landmark_detector.loadModel('dnn/lbfmodel.yaml')"
]
},
{
"cell_type": "markdown",
"id": "761dbc15",
"metadata": {},
"source": [
"We restrict our search to the faces:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "39215601",
"metadata": {},
"outputs": [],
"source": [
"faces = []\n",
"\n",
"for detection in detections[0][0]:\n",
"    if detection[2] >= 0.5:\n",
"        left = detection[3] * width\n",
"        top = detection[4] * height\n",
"        right = detection[5] * width\n",
"        bottom = detection[6] * height\n",
"\n",
"        face_w = right - left\n",
"        face_h = bottom - top\n",
"\n",
"        face_roi = (left, top, face_w, face_h)\n",
"        faces.append(face_roi)\n",
"\n",
"faces = np.array(faces).astype(int)\n",
"\n",
"_, landmarks_list = landmark_detector.fit(image, faces)"
]
},
{
"cell_type": "markdown",
"id": "56aa90c9",
"metadata": {},
"source": [
"The model generates 68 landmarks, which we can visualize:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6d3ab726",
"metadata": {},
"outputs": [],
"source": [
"image_display = image.copy()\n",
"landmarks = landmarks_list[0][0].astype(int)\n",
"\n",
"for idx, landmark in enumerate(landmarks):\n",
"    cv.circle(image_display, landmark, 2, (0,255,255), -1)\n",
"    cv.putText(image_display, str(idx), landmark, cv.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1, \n",
"               cv.LINE_AA)\n",
"\n",
"plt.figure(figsize=(10,10))\n",
"plt.imshow(image_display[700:1050,500:910,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "7cee8969",
"metadata": {},
"source": [
"If we do not need the numbering, we can use a simpler approach, i.e. the [`drawFacemarks()`](https://docs.opencv.org/4.5.3/db/d7c/group__face.html#ga318d9669d5ed4dfc6ab9fae2715310f5) function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1039e253",
"metadata": {},
"outputs": [],
"source": [
"image_display = image.copy()\n",
"for landmarks_set in landmarks_list:\n",
"    cv.face.drawFacemarks(image_display, landmarks_set, (0, 255, 0))\n",
"\n",
"plt.figure(figsize=(10,10))\n",
"plt.imshow(image_display[500:1050,500:1610,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "db16a1bf",
"metadata": {},
"source": [
"# Exercise 1\n",
"\n",
"In the `vid` directory there are `blinking-*.mp4` videos. Write a program that detects blinks. Optionally you can use the *eye aspect ratio* from [this article](http://vision.fe.uni-lj.si/cvww2016/proceedings/papers/05.pdf) (recalled below) or propose your own solution."
]
},
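{
"cell_type": "markdown",
"id": "c0ffee08",
"metadata": {},
"source": [
"As a hint, the *eye aspect ratio* from the cited article is, for the six landmarks $p_1, \\dots, p_6$ of one eye,\n",
"\n",
"$$\\mathit{EAR} = \\frac{\\lVert p_2 - p_6 \\rVert + \\lVert p_3 - p_5 \\rVert}{2 \\lVert p_1 - p_4 \\rVert},$$\n",
"\n",
"a value which drops sharply while the eye is closed."
]
}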
],
"metadata": {
"author": "Andrzej Wójtowicz",
"email": "andre@amu.edu.pl",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"lang": "pl",
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"subtitle": "09. Metody głębokiego uczenia (1) [laboratoria]",
"title": "Widzenie komputerowe",
"year": "2021"
},
"nbformat": 4,
"nbformat_minor": 5
}
840
wko-10.ipynb
Normal file
@ -0,0 +1,840 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3c8a4b52",
"metadata": {},
"source": [
"![Logo 1](img/aitech-logotyp-1.jpg)\n",
"<div class=\"alert alert-block alert-info\">\n",
"<h1> Widzenie komputerowe </h1>\n",
"<h2> 10. <i>Metody głębokiego uczenia (2)</i> [laboratoria]</h2>\n",
"<h3>Andrzej Wójtowicz (2021)</h3>\n",
"</div>\n",
"\n",
"![Logo 2](img/aitech-logotyp-2.jpg)"
]
},
{
"cell_type": "markdown",
"id": "783d6d64",
"metadata": {},
"source": [
"In the material below we will see how to use pretrained neural network models for tasks related to multiple-object detection, human pose estimation, text detection and recognition, and super-resolution.\n",
"\n",
"Note: working through the material below requires downloading about 700 MB of data.\n",
"\n",
"Let us start by loading the necessary libraries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef18510f",
"metadata": {},
"outputs": [],
"source": [
"import cv2 as cv\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"id": "e45afc56",
"metadata": {},
"source": [
"We will save the downloaded files in the `dnn` directory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7aac31ef",
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p dnn"
]
},
{
"cell_type": "markdown",
"id": "f792eb4f",
"metadata": {},
"source": [
"# Object detection\n",
"\n",
"## SSD\n",
"\n",
"In the previous materials we used [SSD](https://arxiv.org/pdf/1512.02325.pdf) to detect multiple faces in a photo. In the example below we can see it used to detect multiple objects - the network was trained on the [Common Objects in Context](https://cocodataset.org/) (COCO) dataset. We will use a model available for the [Tensorflow](https://github.com/tensorflow/models/tree/master/research/object_detection) framework (other models can be found in the [Detection Model Zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md)):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aa10b6fa",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/ssd_mobilenet_v2_coco_2018_03_29.tar.gz http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz\n",
"!cd dnn && tar xzf ssd_mobilenet_v2_coco_2018_03_29.tar.gz && rm ssd_mobilenet_v2_coco_2018_03_29.tar.gz"
]
},
{
"cell_type": "markdown",
"id": "99ec1efa",
"metadata": {},
"source": [
"We have downloaded the model; now we generate the configuration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eac9a8da",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/ssd_mobilenet_v2_coco_2018_03_29/tf_text_graph_ssd.py https://raw.githubusercontent.com/opencv/opencv/4.5.3/samples/dnn/tf_text_graph_ssd.py\n",
"!wget -q --show-progress -O dnn/ssd_mobilenet_v2_coco_2018_03_29/tf_text_graph_common.py https://raw.githubusercontent.com/opencv/opencv/4.5.3/samples/dnn/tf_text_graph_common.py\n",
"!cd dnn/ssd_mobilenet_v2_coco_2018_03_29 && python3 tf_text_graph_ssd.py --input frozen_inference_graph.pb --output net.pbtxt --config pipeline.config"
]
},
{
"cell_type": "markdown",
"id": "232e2987",
"metadata": {},
"source": [
"We load the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b4180e5",
"metadata": {},
"outputs": [],
"source": [
"model = cv.dnn.readNetFromTensorflow(\"dnn/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb\",\n",
"                                     \"dnn/ssd_mobilenet_v2_coco_2018_03_29/net.pbtxt\")"
]
},
{
"cell_type": "markdown",
"id": "0bbfd2a4",
"metadata": {},
"source": [
"We download and load the object class labels:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "17335a42",
"metadata": {},
"outputs": [],
"source": [
"!wget -q -O - https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/data/mscoco_complete_label_map.pbtxt | grep display_name | grep -o '\".*\"' | tr -d '\"' > dnn/ssd_mobilenet_v2_coco_2018_03_29/coco-labels.txt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "662e1a33",
"metadata": {},
"outputs": [],
"source": [
"with open('dnn/ssd_mobilenet_v2_coco_2018_03_29/coco-labels.txt', 'r') as f_fd:\n",
"    classes = f_fd.read().splitlines()\n",
"    \n",
"print(len(classes), classes[:5])"
]
},
{
"cell_type": "markdown",
"id": "94cace8a",
"metadata": {},
"source": [
"We will try to check which objects are in the photo below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "91834aba",
"metadata": {},
"outputs": [],
"source": [
"image = cv.imread('img/messi5.jpg')\n",
"plt.figure(figsize=[7,7])\n",
"plt.imshow(image[:,:,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "43774ae3",
"metadata": {},
"source": [
"The network returns a list of objects with their coordinates in the image and an object identifier (we set the cut-off threshold to 0.5):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84652c91",
"metadata": {},
"outputs": [],
"source": [
"height, width, _ = image.shape\n",
"\n",
"image_blob = cv.dnn.blobFromImage(image=image, scalefactor=1, size=(300, 300), mean=(0,0,0), \n",
"                                  swapRB=True, crop=False)\n",
"\n",
"model.setInput(image_blob)\n",
"detections = model.forward()\n",
"\n",
"image_out = image.copy()\n",
"\n",
"for i in range(detections.shape[2]):\n",
"    confidence = detections[0, 0, i, 2]\n",
"    if confidence > 0.5:\n",
"\n",
"        box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])\n",
"        (x1, y1, x2, y2) = box.astype('int')\n",
"        \n",
"        class_id = int(detections[0, 0, i, 1])\n",
"\n",
"        cv.rectangle(image_out, (x1, y1), (x2, y2), (0, 255, 0), 6)\n",
"        label = '{:} ({:.3f})'.format(classes[class_id], confidence)\n",
"        label_size, base_line = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.65, 1)\n",
"        cv.rectangle(image_out, (x1, y1 - label_size[1]), (x1 + label_size[0], y1 + base_line), \n",
"                     (255, 255, 255), cv.FILLED)\n",
"        cv.putText(image_out, label, (x1, y1), cv.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 0))\n",
"    \n",
"plt.figure(figsize=[12,12])\n",
"plt.imshow(image_out[:,:,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "3fa16e91",
"metadata": {},
"source": [
"## YOLOv4"
]
},
{
"cell_type": "markdown",
"id": "27ce3522",
"metadata": {},
"source": [
"Another popular object detection model is [You Only Look Once](https://github.com/AlexeyAB/darknet) (YOLO). Compared to other networks, this model does not analyse individual regions but looks at the image as a whole, which in a way strikes a balance between speed and precision. Thanks to this property the model is well suited to real-time object detection. It should also cope well when presented with a previously unseen representation of an object (e.g. in shadow) or when the object is surrounded by other unexpected objects.\n",
"\n",
"YOLO is available in several versions; we will check how the compact version performs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c3e7fb1",
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p dnn/yolo_v4_tiny\n",
"!wget -q --show-progress -O dnn/yolo_v4_tiny/yolov4-tiny.weights https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov4-tiny.weights\n",
"!wget -q --show-progress -O dnn/yolo_v4_tiny/yolov4-tiny.cfg https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg\n",
"!wget -q --show-progress -O dnn/yolo_v4_tiny/coco.names https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/coco.names"
]
},
{
"cell_type": "markdown",
"id": "9497b09c",
"metadata": {},
"source": [
"We load the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8cc6a3a",
"metadata": {},
"outputs": [],
"source": [
"model = cv.dnn.readNetFromDarknet(\"dnn/yolo_v4_tiny/yolov4-tiny.cfg\", \n",
"                                  \"dnn/yolo_v4_tiny/yolov4-tiny.weights\")"
]
},
{
"cell_type": "markdown",
"id": "df331450",
"metadata": {},
"source": [
"We load the object labels:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f01d354",
"metadata": {},
"outputs": [],
"source": [
"with open('dnn/yolo_v4_tiny/coco.names', 'r') as f_fd:\n",
"    classes = f_fd.read().splitlines()\n",
"    \n",
"print(len(classes), classes[:5])"
]
},
{
"cell_type": "markdown",
"id": "3fc5e3fc",
"metadata": {},
"source": [
"We will test it on the photo below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df65dee0",
"metadata": {},
"outputs": [],
"source": [
"image = cv.imread('img/pedestrians.jpg')\n",
"plt.figure(figsize=[7,7])\n",
"plt.imshow(image[:,:,::-1]);"
]
},
{
"cell_type": "markdown",
"id": "9fbb6325",
"metadata": {},
"source": [
"When using this model we have to deal with a few subtleties. The model uses the Darknet framework, so we have to indicate that we want the predictions coming from the last layer. Additionally, we have a few cut-off thresholds to define, i.e. for *objectness*, for *confidence*, and for non-maximum suppression, which limits overlapping boxes of detected objects (cf. [`cv.dnn.NMSBoxes()`](https://docs.opencv.org/4.5.3/d6/d0f/group__dnn.html#ga9d118d70a1659af729d01b10233213ee); a sketch of the overlap measure it relies on follows below). Here is the result:"
]
},
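{
"cell_type": "markdown",
"id": "c0ffee09",
"metadata": {},
"source": [
"As an aside (our own helper, `iou`, not part of OpenCV): non-maximum suppression scores box overlap with intersection over union (IoU); a minimal sketch for boxes in the `[left, top, width, height]` convention used below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0ffee0a",
"metadata": {},
"outputs": [],
"source": [
"def iou(box_a, box_b):\n",
"    # boxes given as [left, top, width, height]\n",
"    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]\n",
"    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]\n",
"    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))\n",
"    inter_h = max(0, min(ay2, by2) - max(ay1, by1))\n",
"    inter = inter_w * inter_h\n",
"    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter\n",
"    return inter / union if union > 0 else 0.0\n",
"\n",
"print(iou([0, 0, 10, 10], [5, 5, 10, 10]))  # ~0.143"
]
},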
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "d8450888",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"height, width, _ = image.shape\n",
|
||||||
|
"\n",
|
||||||
|
"image_blob = cv.dnn.blobFromImage(image=image, scalefactor=1/255, size=(416, 416), mean=(0,0,0), \n",
|
||||||
|
" swapRB=True, crop=False)\n",
|
||||||
|
"\n",
|
||||||
|
"model.setInput(image_blob)\n",
|
||||||
|
"detections = model.forward([model.getLayerNames()[i[0] - 1] for i in model.getUnconnectedOutLayers()])\n",
|
||||||
|
"\n",
|
||||||
|
"image_out = image.copy()\n",
|
||||||
|
"\n",
|
||||||
|
"class_ids = []\n",
|
||||||
|
"confidences = []\n",
|
||||||
|
"boxes = []\n",
|
||||||
|
"\n",
|
||||||
|
"for out in detections:\n",
|
||||||
|
" for detection in out:\n",
|
||||||
|
" if detection[4] > 0.5: # objectness thr.\n",
|
||||||
|
" scores = detection[5:]\n",
|
||||||
|
" class_id = np.argmax(scores)\n",
|
||||||
|
" confidence = scores[class_id]\n",
|
||||||
|
" if confidence > 0.5: # confidence thr.\n",
|
||||||
|
" center_x = int(detection[0] * width)\n",
|
||||||
|
" center_y = int(detection[1] * height)\n",
|
||||||
|
" b_width = int(detection[2] * width)\n",
|
||||||
|
" b_height = int(detection[3] * height)\n",
|
||||||
|
"\n",
|
||||||
|
" b_left = int(center_x - b_width / 2)\n",
|
||||||
|
" b_top = int(center_y - b_height / 2)\n",
|
||||||
|
" class_ids.append(class_id)\n",
|
||||||
|
" confidences.append(float(confidence))\n",
|
||||||
|
" boxes.append([b_left, b_top, b_width, b_height])\n",
|
||||||
|
"\n",
|
||||||
|
"indices = cv.dnn.NMSBoxes(boxes, confidences, score_threshold=0.5, nms_threshold=0.5)\n",
|
||||||
|
"for i in indices:\n",
|
||||||
|
" idx = i[0]\n",
|
||||||
|
" box = boxes[idx]\n",
|
||||||
|
" x1 = box[0]\n",
|
||||||
|
" y1 = box[1]\n",
|
||||||
|
" x2 = box[0] + box[2]\n",
|
||||||
|
" y2 = box[1] + box[3]\n",
|
||||||
|
" cv.rectangle(image_out, (x1, y1), (x2, y2), (0, 255, 0), 6)\n",
|
||||||
|
" label = '{:} ({:.3f})'.format(classes[class_ids[idx]], confidences[idx])\n",
|
||||||
|
" \n",
|
||||||
|
" label_size, base_line = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.65, 1)\n",
|
||||||
|
" cv.rectangle(image_out, (x1, y1 - label_size[1]), (x1 + label_size[0], y1 + base_line), \n",
|
||||||
|
" (255, 255, 255), cv.FILLED)\n",
|
||||||
|
" cv.putText(image_out, label, (x1, y1), cv.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 0))\n",
|
||||||
|
" \n",
|
||||||
|
"plt.figure(figsize=[12,12])\n",
|
||||||
|
"plt.imshow(image_out[:,:,::-1]);"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "41e32b8e",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"# Szacowanie pozy człowieka\n",
|
||||||
|
"\n",
|
||||||
|
"Kolejnym interesującym zagadnieniem jest szacowanie pozy człowieka (ang. *human pose estimation*) na podstawie zdjęcia. Celem jest tutaj wykrycie charakterystycznych punktów orientacyjnych, które mogą potem zostać wykorzystane np. treningu sportowego, kontroli gestów, korekcji postawy, itp. W tym celu wykorzystamy [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose)."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "6a3fedf2",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"!mkdir -p dnn/openpose\n",
|
||||||
|
"!wget -q --show-progress -O dnn/openpose/pose_iter_160000.caffemodel http://posefs1.perception.cs.cmu.edu/Users/tsimon/Projects/coco/data/models/mpi/pose_iter_160000.caffemodel\n",
|
||||||
|
"!wget -q --show-progress -O dnn/openpose/pose_deploy_linevec_faster_4_stages.prototxt https://raw.githubusercontent.com/CMU-Perceptual-Computing-Lab/openpose/master/models/pose/mpi/pose_deploy_linevec_faster_4_stages.prototxt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "851c965b",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Wczytujemy model:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "d55edcb8",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"model = cv.dnn.readNetFromCaffe(\"dnn/openpose/pose_deploy_linevec_faster_4_stages.prototxt\",\n",
|
||||||
|
" \"dnn/openpose/pose_iter_160000.caffemodel\")"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "c2c701c3",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Będziemy chcieli przeanalizować poniższe zdjęcie:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "aeaed6eb",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"image = cv.imread(\"img/messi5.jpg\")\n",
|
||||||
|
"plt.figure(figsize=[7,7])\n",
|
||||||
|
"plt.imshow(image[:,:,::-1]);"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "00a42dd5",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Zdefinujemy poniżej połączenia pomiędzy 15 punktami orientacyjnymi:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "894acae5",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"pose_points_n = 15\n",
|
||||||
|
"pose_pairs = [[0,1], [1,2], [2,3], [3,4], [1,5], [5,6], [6,7], [1,14], [14,8], [8,9], [9,10], [14,11], [11,12], [12,13]]"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "5a8a5028",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"W wyniku otrzymujemy mapy prawodpodobieństwa występowania danego punktu orientacyjnego:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "24ca95c6",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"height, width, _ = image.shape\n",
|
||||||
|
"\n",
|
||||||
|
"image_blob = cv.dnn.blobFromImage(image, 1.0/255, (368, 368), (0, 0, 0), swapRB=False, crop=False)\n",
|
||||||
|
"model.setInput(image_blob)\n",
|
||||||
|
"\n",
|
||||||
|
"output = model.forward()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(20,3))\n",
|
||||||
|
"for i in range(pose_points_n):\n",
|
||||||
|
" prob_map = output[0, i, :, :]\n",
|
||||||
|
" disp_map = cv.resize(prob_map, (width, height), cv.INTER_LINEAR)\n",
|
||||||
|
" plt.subplot(2, 8, i+1)\n",
|
||||||
|
" plt.axis('off')\n",
|
||||||
|
" plt.imshow(disp_map, cmap='jet', vmin=0, vmax=1)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "c8be6dc1",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Przeskalowujemy wyniki do rozmiarów obrazu wejściowego i przy pomocy [`cv.minMaxLoc()`](https://docs.opencv.org/4.5.3/d2/de8/group__core__array.html#gab473bf2eb6d14ff97e89b355dac20707) znajdujemy wartość maksymalną (dodatkowo sprawdzamy czy wartość prawdopodobieństwa jest odpowiednio duża):"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "a3163987",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"scale_x = width / output.shape[3]\n",
|
||||||
|
"scale_y = height / output.shape[2]\n",
|
||||||
|
"\n",
|
||||||
|
"points = []\n",
|
||||||
|
"\n",
|
||||||
|
"for i in range(pose_points_n):\n",
|
||||||
|
" prob_map = output[0, i, :, :]\n",
|
||||||
|
" \n",
|
||||||
|
" _, prob, _, point = cv.minMaxLoc(prob_map)\n",
|
||||||
|
" \n",
|
||||||
|
" x = scale_x * point[0]\n",
|
||||||
|
" y = scale_y * point[1]\n",
|
||||||
|
"\n",
|
||||||
|
" if prob > 0.1: # thr.\n",
|
||||||
|
" points.append((int(x), int(y)))\n",
|
||||||
|
" else:\n",
|
||||||
|
" points.append(None)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "f1f8cac3",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Możemy teraz nanieść punkty na obraz i połączyć je w szkielet"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "fcbda6c6",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"image_points = image.copy()\n",
|
||||||
|
"image_skeleton = image.copy()\n",
|
||||||
|
"\n",
|
||||||
|
"for i, p in enumerate(points):\n",
|
||||||
|
" cv.circle(image_points, p, 8, (255, 255, 0), thickness=-1, lineType=cv.FILLED)\n",
|
||||||
|
" cv.putText(image_points, \"{}\".format(i), p, cv.FONT_HERSHEY_SIMPLEX, 1, (0,255,255), 2, lineType=cv.LINE_AA)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"for pair in pose_pairs:\n",
|
||||||
|
" part_a = pair[0]\n",
|
||||||
|
" part_b = pair[1]\n",
|
||||||
|
"\n",
|
||||||
|
" if points[part_a] and points[part_b]:\n",
|
||||||
|
" cv.line(image_skeleton, points[part_a], points[part_b], (0, 255, 255), 4)\n",
|
||||||
|
" cv.circle(image_skeleton, points[part_a], 7, (255, 255, 0), thickness=-1, lineType=cv.FILLED)\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(20,20))\n",
|
||||||
|
"plt.subplot(121)\n",
|
||||||
|
"plt.imshow(image_points[:,:,::-1])\n",
|
||||||
|
"plt.subplot(122)\n",
|
||||||
|
"plt.imshow(image_skeleton[:,:,::-1]);"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "ea3421fd",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"# Wykrywanie i rozpoznawanie tekstu\n",
|
||||||
|
"\n",
|
||||||
|
"W kolejnym przykładzie zobaczymy jak możemy wykryć na zdjęciu tekst przy pomocy [DB](https://github.com/MhLiao/DB) oraz rozpoznać go przy pomocy [CRNN](https://arxiv.org/pdf/1507.05717.pdf)."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "5ef81ed2",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import gdown\n",
|
||||||
|
"\n",
|
||||||
|
"for url, output in [('https://drive.google.com/uc?export=dowload&id=19YWhArrNccaoSza0CfkXlA8im4-lAGsR', 'dnn/DB_TD500_resnet50.onnx'), \n",
|
||||||
|
" ('https://drive.google.com/uc?export=dowload&id=12diBsVJrS9ZEl6BNUiRp9s0xPALBS7kt', 'dnn/crnn_cs.onnx'),\n",
|
||||||
|
" ('https://drive.google.com/uc?export=dowload&id=1oKXxXKusquimp7XY1mFvj9nwLzldVgBR', 'dnn/alphabet_94.txt')]:\n",
|
||||||
|
" gdown.download(url, output, quiet=False)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "72721bc5",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Będziemy pracować na poniższym zdjęciu:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "86e3f889",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"image = cv.imread('img/road-sign.jpg')\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,7))\n",
|
||||||
|
"plt.imshow(image[:,:,::-1]);"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "ec7d3ce4",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Wczytujemy obsługiwany alfabet:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "5d27f129",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"with open('dnn/alphabet_94.txt', 'r') as f_fd:\n",
|
||||||
|
" alphabet = f_fd.read().splitlines()\n",
|
||||||
|
" \n",
|
||||||
|
"print(len(alphabet), alphabet[:15])"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "d3373c60",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"OpenCV posiada gotowe API dla sieci DB poprzez [`cv.dnn_TextDetectionModel_DB()`](https://docs.opencv.org/4.5.3/db/d0f/classcv_1_1dnn_1_1TextDetectionModel__DB.html):"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "b3c3bfc2",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"text_detector = cv.dnn_TextDetectionModel_DB(\"dnn/DB_TD500_resnet50.onnx\")\n",
|
||||||
|
"\n",
|
||||||
|
"text_detector.setBinaryThreshold(0.4).setPolygonThreshold(0.5)\n",
|
||||||
|
"text_detector.setInputParams(scale=1.0/255, size=(640, 640), \n",
|
||||||
|
" mean=(122.67891434, 116.66876762, 104.00698793), swapRB=True)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "31300a5f",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"W wyniku otrzymujemy ramki, na których występuje tekst (choć jak widzimy, są też wyniki fałszywie pozytywne):"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"id": "d14502d4",
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"boxes, confs = text_detector.detect(image)\n",
|
||||||
|
"\n",
|
||||||
|
"image_out = image.copy()\n",
|
||||||
|
"\n",
|
||||||
|
"cv.polylines(image_out, boxes, True, (255, 0, 255), 4)\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,7))\n",
|
||||||
|
"plt.imshow(image_out[:,:,::-1]);"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"id": "3c3eae71",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"W kolejnym kroku przygotowujemy model do rozpoznawania tekstu przy pomocy [`cv.dnn_TextRecognitionModel()`](https://docs.opencv.org/4.5.3/de/dee/classcv_1_1dnn_1_1TextRecognitionModel.html):"
|
||||||
|
]
|
||||||
|
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6b29f6a",
"metadata": {},
"outputs": [],
"source": [
"text_recognizer = cv.dnn_TextRecognitionModel(\"dnn/crnn_cs.onnx\")\n",
"text_recognizer.setDecodeType(\"CTC-greedy\")\n",
"text_recognizer.setVocabulary(alphabet)\n",
"text_recognizer.setInputParams(scale=1/127.5, size=(100, 32), mean=(127.5, 127.5, 127.5), swapRB=True)"
]
},
{
"cell_type": "markdown",
"id": "a17f6437",
"metadata": {},
"source": [
"We warp each detected box to a 100x32 patch and recognize the text in it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6909f83",
"metadata": {},
"outputs": [],
"source": [
"for box in boxes:\n",
" vertices = np.asarray(box).astype(np.float32)\n",
" output_size = (100, 32)\n",
" target_vertices = np.array([\n",
" [0, output_size[1] - 1],\n",
" [0, 0],\n",
" [output_size[0] - 1, 0],\n",
" [output_size[0] - 1, output_size[1] - 1]],\n",
" dtype=\"float32\")\n",
" rotation_matrix = cv.getPerspectiveTransform(vertices, target_vertices)\n",
" cropped_roi = cv.warpPerspective(image, rotation_matrix, output_size)\n",
"\n",
" result = text_recognizer.recognize(cropped_roi)\n",
" print(result)"
]
},
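{
"cell_type": "markdown",
"id": "ed17a003",
"metadata": {},
"source": [
"The two stages can also be combined into a single annotated preview. The sketch below reuses `output_size` and `target_vertices` from the loop above and draws each recognized string at the first vertex of its box (font, scale and colors are arbitrary choices):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed17a004",
"metadata": {},
"outputs": [],
"source": [
"image_annotated = image.copy()\n",
"\n",
"for box in boxes:\n",
"    vertices = np.asarray(box).astype(np.float32)\n",
"    matrix = cv.getPerspectiveTransform(vertices, target_vertices)\n",
"    roi = cv.warpPerspective(image, matrix, output_size)\n",
"    text = text_recognizer.recognize(roi)\n",
"    # Outline the box and label it with the recognized text.\n",
"    cv.polylines(image_annotated, [box], True, (255, 0, 255), 2)\n",
"    cv.putText(image_annotated, text, tuple(box[0]), cv.FONT_HERSHEY_SIMPLEX,\n",
"               0.8, (0, 255, 0), 2)\n",
"\n",
"plt.figure(figsize=(5,7))\n",
"plt.imshow(image_annotated[:,:,::-1]);"
]
},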
{
"cell_type": "markdown",
"id": "e0b4b3c0",
"metadata": {},
"source": [
"# Super-resolution\n",
"\n",
"When upscaling an image, the missing pixels have to be interpolated somehow. For small magnifications traditional methods are usually sufficient, but when we work with a low-resolution image and want to enlarge it significantly, we would also like to obtain high quality, e.g. by taking into account information from the neighborhood of each pixel. This is the subject of super-resolution.\n",
"\n",
"A [paper](https://arxiv.org/pdf/1902.06068.pdf) from 2020 compares the models available at that time (see the chart on p. 15); for example, the [EDSR](https://github.com/Saafke/EDSR_Tensorflow) model performs quite well, albeit at the cost of considerable computational overhead (cf. also the [OpenCV](https://github.com/opencv/opencv_contrib/blob/master/modules/dnn_superres/README.md) benchmarks). We will test EDSR with 4x upscaling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6b9f6be9",
"metadata": {},
"outputs": [],
"source": [
"!wget -q --show-progress -O dnn/EDSR_x4.pb https://raw.githubusercontent.com/Saafke/EDSR_Tensorflow/master/models/EDSR_x4.pb"
]
},
{
"cell_type": "markdown",
"id": "92f09e43",
"metadata": {},
"source": [
"Using [`cv.dnn_superres.DnnSuperResImpl_create()`](https://docs.opencv.org/4.5.3/d8/d11/classcv_1_1dnn__superres_1_1DnnSuperResImpl.html) we prepare the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "368ca179",
"metadata": {},
"outputs": [],
"source": [
"sr = cv.dnn_superres.DnnSuperResImpl_create()\n",
"sr.readModel('dnn/EDSR_x4.pb')\n",
"sr.setModel('edsr', 4)"
]
},
{
"cell_type": "markdown",
"id": "9e0169f3",
"metadata": {},
"source": [
"Next, we upscale the given image (this operation may take a while):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45c89529",
"metadata": {},
"outputs": [],
"source": [
"image = cv.imread('img/parrot.jpg')\n",
"\n",
"image_EDSR = sr.upsample(image)\n",
"\n",
"plt.figure(figsize=(25,25))\n",
"plt.subplot(211)\n",
"plt.imshow(image[:,:,::-1])\n",
"plt.subplot(212)\n",
"plt.imshow(image_EDSR[:,:,::-1]);"
]
},
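{
"cell_type": "markdown",
"id": "ed17a005",
"metadata": {},
"source": [
"For comparison, a sketch of a plain bicubic 4x upscale of the same image; viewing both results side by side makes the quality gain of EDSR easier to judge:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed17a006",
"metadata": {},
"outputs": [],
"source": [
"# Traditional 4x upscaling baseline: bicubic interpolation.\n",
"image_bicubic = cv.resize(image, None, fx=4, fy=4, interpolation=cv.INTER_CUBIC)\n",
"\n",
"plt.figure(figsize=(25,25))\n",
"plt.subplot(211)\n",
"plt.imshow(image_bicubic[:,:,::-1])\n",
"plt.subplot(212)\n",
"plt.imshow(image_EDSR[:,:,::-1]);"
]
},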
{
"cell_type": "markdown",
"id": "3c157587",
"metadata": {},
"source": [
"# Exercise 1\n",
"\n",
"Using the [MediaPipe](https://google.github.io/mediapipe/solutions/selfie_segmentation.html) library, replace the background of the selfie `img/selfie-man.jpg` with `img/selfie-background.jpg` (you may also flip the image horizontally)."
]
},
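{
"cell_type": "markdown",
"id": "ed17a007",
"metadata": {},
"source": [
"One possible starting point is sketched below; it assumes the MediaPipe `SelfieSegmentation` solution with `model_selection=1` and a 0.5 mask threshold (both arbitrary choices). The expected result is shown under the sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed17a008",
"metadata": {},
"outputs": [],
"source": [
"import mediapipe as mp\n",
"\n",
"selfie = cv.imread('img/selfie-man.jpg')\n",
"background = cv.imread('img/selfie-background.jpg')\n",
"background = cv.resize(background, (selfie.shape[1], selfie.shape[0]))\n",
"\n",
"# MediaPipe expects RGB input; the result carries a soft segmentation mask.\n",
"with mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1) as segmenter:\n",
"    results = segmenter.process(cv.cvtColor(selfie, cv.COLOR_BGR2RGB))\n",
"\n",
"# Composite: person pixels from the selfie, the rest from the new background.\n",
"mask = results.segmentation_mask > 0.5\n",
"composed = np.where(mask[..., None], selfie, background)\n",
"\n",
"plt.figure(figsize=(7,7))\n",
"plt.imshow(composed[:,:,::-1]);"
]
},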
{
"cell_type": "markdown",
"id": "d6116f61",
"metadata": {},
"source": [
"![Program output](img/selfie-out.png)"
]
}
],
"metadata": {
"author": "Andrzej Wójtowicz",
"email": "andre@amu.edu.pl",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"lang": "pl",
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"subtitle": "10. Metody głębokiego uczenia w widzeniu komputerowym [laboratoria]",
"title": "Widzenie komputerowe",
"year": "2021"
},
"nbformat": 4,
"nbformat_minor": 5
}