# Load the Drive helper and mount
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
path_to_file = "/content/wko"
path_to_output = "/content/drive/MyDrive/wko"
Copy the archive to Drive:
import shutil
shutil.make_archive(path_to_output, 'zip', path_to_file)
'/content/drive/MyDrive/wko.zip'
Extract the archive back into the Colab filesystem:
import zipfile
with zipfile.ZipFile(path_to_output + ".zip", "r") as zip_ref:
    zip_ref.extractall(path_to_file)
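The same archive/extract round trip can be sketched on a temporary directory (the paths and file name below are illustrative, not the notebook's actual ones):

```python
import os
import shutil
import tempfile
import zipfile

# Create a scratch directory with one file to stand in for /content/wko.
src = tempfile.mkdtemp()
with open(os.path.join(src, "note.txt"), "w") as f:
    f.write("hello")

# Archive it; make_archive appends ".zip" and returns the full archive path.
archive = shutil.make_archive(os.path.join(tempfile.mkdtemp(), "backup"), "zip", src)

# Extract into a fresh directory, as done above for Drive.
dst = tempfile.mkdtemp()
with zipfile.ZipFile(archive, "r") as zip_ref:
    zip_ref.extractall(dst)

restored = open(os.path.join(dst, "note.txt")).read()
```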
!git clone https://git.wmi.amu.edu.pl/s444417/wko
Cloning into 'wko'... remote: Enumerating objects: 102, done. remote: Counting objects: 100% (102/102), done. remote: Compressing objects: 100% (100/100), done. remote: Total 102 (delta 2), reused 102 (delta 2) Receiving objects: 100% (102/102), 126.67 MiB | 476.00 KiB/s, done. Resolving deltas: 100% (2/2), done. Checking out files: 100% (97/97), done.
In the material below we will see how to use deep neural network methods in the OpenCV package.
First, let's load the necessary libraries:
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
OpenCV supports many neural network libraries and model formats. Models are trained outside OpenCV; the library is used only for inference. It does, however, apply quite a few optimizations of its own compared to the source neural frameworks, so inference can actually be faster here.
We will download model files and auxiliary data from the web and store them in the dnn directory:
!mkdir -p dnn
Image classification
We will try to use an image classification network trained on the ImageNet dataset. Let's download the file describing the 1000 possible classes:
!wget -O dnn/classification_classes_ILSVRC2012.txt https://raw.githubusercontent.com/opencv/opencv/master/samples/data/dnn/classification_classes_ILSVRC2012.txt
Let's look at the first five classes in the file:
with open('dnn/classification_classes_ILSVRC2012.txt', 'r') as f_fd:
    classes = f_fd.read().splitlines()
print(len(classes), classes[:5])
1000 ['tench, Tinca tinca', 'goldfish, Carassius auratus', 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias', 'tiger shark, Galeocerdo cuvieri', 'hammerhead, hammerhead shark']
We will use a DenseNet for classification. We will download one of the smaller reimplementations, which is hosted, among other places, on Google Drive (we need to install one extra package):
!pip3 install --user --disable-pip-version-check gdown
Requirement already satisfied: gdown in /usr/local/lib/python3.8/dist-packages (4.4.0)
import gdown
url = 'https://drive.google.com/uc?id=0B7ubpZO7HnlCcHlfNmJkU2VPelE'
output = 'dnn/DenseNet_121.caffemodel'
gdown.download(url, output, quiet=False)
Downloading... From: https://drive.google.com/uc?id=0B7ubpZO7HnlCcHlfNmJkU2VPelE To: /content/dnn/DenseNet_121.caffemodel 100%|██████████| 32.3M/32.3M [00:01<00:00, 29.4MB/s]
'dnn/DenseNet_121.caffemodel'
!wget -O dnn/DenseNet_121.prototxt https://raw.githubusercontent.com/shicai/DenseNet-Caffe/master/DenseNet_121.prototxt
Individual neural frameworks have dedicated model-loading functions, e.g. readNetFromCaffe() or readNetFromTorch(), but the generic readNet() can be used as well:
!pip install opencv-python==4.5.5.62
Collecting opencv-python==4.5.5.62 ... Successfully uninstalled opencv-python-4.6.0.66. Successfully installed opencv-python-4.5.5.62
model = cv.dnn.readNet(model='dnn/DenseNet_121.prototxt', config='dnn/DenseNet_121.caffemodel', framework='Caffe')
---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-3-38d16ae567b3> in <module>
----> 1 model = cv.dnn.readNet(model='dnn/DenseNet_121.prototxt', config='dnn/DenseNet_121.caffemodel', framework='Caffe')

error: OpenCV(4.5.5) /io/opencv/modules/dnn/src/caffe/caffe_io.cpp:1162: error: (-2:Unspecified error) FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse NetParameter file: dnn/DenseNet_121.prototxt in function 'ReadNetParamsFromTextFileOrDie'

This parse failure is caused by the wget call: lowercase -o writes wget's own log to the target file rather than the downloaded content, so dnn/DenseNet_121.prototxt does not contain a valid model definition. The file must be fetched with -O (capital O).
We will try to classify the image below:
image = cv.imread('img/flamingo.jpg')
plt.figure(figsize=[5,5])
plt.imshow(image[:,:,::-1]);
To pass the image through the network we have to change its representation using the blobFromImage() function. To get sensible results we must set the preprocessing parameters (this information is given on the model's page):
image_blob = cv.dnn.blobFromImage(image=image, scalefactor=0.017, size=(224, 224), mean=(104, 117, 123),
swapRB=False, crop=False)
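A rough numpy equivalent of what blobFromImage() computes here may help make the tensor layout concrete: mean subtraction, scaling, and HWC→NCHW reordering with a batch dimension (resizing and channel swapping omitted, and the image is synthetic):

```python
import numpy as np

# Synthetic 224x224 BGR image standing in for the loaded photo.
image = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

mean = np.array([104, 117, 123], dtype=np.float32)
scalefactor = 0.017

# blobFromImage subtracts the per-channel mean, multiplies by scalefactor,
# and reorders HWC -> NCHW with a leading batch dimension of 1.
blob = ((image - mean) * scalefactor).transpose(2, 0, 1)[np.newaxis, ...]

print(blob.shape)  # (1, 3, 224, 224)
```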
We set the network's input and retrieve the computed values:
model.setInput(image_blob)
outputs = model.forward()[0]
We compute which class is the most probable:
outputs = outputs.reshape(1000, 1)
label_id = np.argmax(outputs)
probs = np.exp(outputs) / np.sum(np.exp(outputs))
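The last line above is a softmax over the raw scores; a self-contained sketch of the same computation (in a numerically stable form) with a top-5 readout, on synthetic scores:

```python
import numpy as np

# Synthetic raw scores standing in for model.forward()[0].
outputs = np.array([2.0, 0.5, 1.0, 3.0, -1.0])

# Numerically stable softmax: subtract the max before exponentiating.
exp = np.exp(outputs - outputs.max())
probs = exp / exp.sum()

# Indices of the most probable classes, best first.
top5 = np.argsort(probs)[::-1][:5]
print(top5[0], probs[top5[0]])
```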
Result:
plt.imshow(image[:,:,::-1])
plt.title(classes[label_id])
print("{:.2f} %".format(np.max(probs) * 100.0))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-52d75d665f1d> in <module>
----> 1 plt.imshow(image[:,:,::-1])
      2 plt.title(classes[label_id])
      3 print("{:.2f} %".format(np.max(probs) * 100.0))

TypeError: 'NoneType' object is not subscriptable

The TypeError means cv.imread() returned None, i.e. the image was not found at the given path; cv.imread does not raise on a missing file, it silently returns None.
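Because cv.imread silently returns None on a missing file, a small guard helps fail early with a clear message. The `imread_checked` helper and the stub loader below are illustrative, not part of OpenCV:

```python
import numpy as np

def imread_checked(path, loader):
    """Wrap an imread-style loader and raise instead of returning None."""
    img = loader(path)
    if img is None:
        raise FileNotFoundError(f"could not read image: {path}")
    return img

# Stub loader mimicking cv.imread: returns None for unknown paths.
def fake_imread(path):
    return np.zeros((2, 2, 3), dtype=np.uint8) if path == "ok.jpg" else None

img = imread_checked("ok.jpg", fake_imread)

try:
    imread_checked("missing.jpg", fake_imread)
    raised = False
except FileNotFoundError:
    raised = True
```

In the notebook, `imread_checked('img/flamingo.jpg', cv.imread)` would turn the cryptic TypeError above into an explicit FileNotFoundError.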
Face detection
For face detection we will use an SSD-based network:
!curl -o dnn/res10_300x300_ssd_iter_140000_fp16.caffemodel https://raw.githubusercontent.com/opencv/opencv_3rdparty/dnn_samples_face_detector_20180205_fp16/res10_300x300_ssd_iter_140000_fp16.caffemodel
!curl -o dnn/res10_300x300_ssd_iter_140000_fp16.prototxt https://raw.githubusercontent.com/opencv/opencv/master/samples/dnn/face_detector/deploy.prototxt
We load the model:
model = cv.dnn.readNet(model='dnn/res10_300x300_ssd_iter_140000_fp16.prototxt', config='dnn/res10_300x300_ssd_iter_140000_fp16.caffemodel', framework='Caffe')
We want to detect the faces in the image below:
image = cv.imread('img/people.jpg')
plt.figure(figsize=[7,7])
plt.imshow(image[:,:,::-1]);
We find the faces and mark them on the photo (we used 0.5 as the confidence threshold; see the preprocessing information):
height, width, _ = image.shape
image_blob = cv.dnn.blobFromImage(image, scalefactor=1.0, size=(300, 300), mean=[104, 177, 123],
swapRB=False, crop=False)
model.setInput(image_blob)
detections = model.forward()
image_out = image.copy()
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
        (x1, y1, x2, y2) = box.astype('int')
        cv.rectangle(image_out, (x1, y1), (x2, y2), (0, 255, 0), 6)
        label = '{:.3f}'.format(confidence)
        label_size, base_line = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 3.0, 1)
        cv.rectangle(image_out, (x1, y1 - label_size[1]), (x1 + label_size[0], y1 + base_line),
                     (255, 255, 255), cv.FILLED)
        cv.putText(image_out, label, (x1, y1), cv.FONT_HERSHEY_SIMPLEX, 3.0, (0, 0, 0))
plt.figure(figsize=[12,12])
plt.imshow(image_out[:,:,::-1]);
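The forward() output of this detector is a tensor of shape (1, 1, N, 7), where each row holds [batch_id, class_id, confidence, x1, y1, x2, y2] with coordinates normalized to [0, 1]. The decoding done in the loop above can be exercised on a synthetic tensor:

```python
import numpy as np

width, height = 640, 480

# Synthetic detections: one confident face, one low-confidence noise row.
detections = np.zeros((1, 1, 2, 7), dtype=np.float32)
detections[0, 0, 0] = [0, 1, 0.97, 0.25, 0.25, 0.50, 0.75]
detections[0, 0, 1] = [0, 1, 0.10, 0.00, 0.00, 0.10, 0.10]

boxes = []
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        # Scale the normalized corners to pixel coordinates.
        box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
        boxes.append(box.astype(int))

print(boxes)
```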
Facial landmarks
OpenCV can also detect facial landmarks. We will use a model implemented during Google Summer of Code, loaded via createFacemarkLBF():
!curl -o dnn/lbfmodel.yaml https://raw.githubusercontent.com/kurnianggoro/GSOC2017/master/data/lbfmodel.yaml
landmark_detector = cv.face.createFacemarkLBF()
landmark_detector.loadModel('dnn/lbfmodel.yaml')
We restrict our search to the detected faces:
faces = []
for detection in detections[0][0]:
    if detection[2] >= 0.5:
        left = detection[3] * width
        top = detection[4] * height
        right = detection[5] * width
        bottom = detection[6] * height
        face_w = right - left
        face_h = bottom - top
        face_roi = (left, top, face_w, face_h)
        faces.append(face_roi)
faces = np.array(faces).astype(int)
_, landmarks_list = landmark_detector.fit(image, faces)
The model produces 68 landmarks, which we can visualize:
image_display = image.copy()
landmarks = landmarks_list[0][0].astype(int)
for idx, landmark in enumerate(landmarks):
    cv.circle(image_display, landmark, 2, (0, 255, 255), -1)
    cv.putText(image_display, str(idx), landmark, cv.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1,
               cv.LINE_AA)
plt.figure(figsize=(10,10))
plt.imshow(image_display[700:1050,500:910,::-1]);
If we don't need the numbering, we can use a simpler approach, i.e. the drawFacemarks() function:
image_display = image.copy()
for landmarks_set in landmarks_list:
    cv.face.drawFacemarks(image_display, landmarks_set, (0, 255, 0))
plt.figure(figsize=(10,10))
plt.imshow(image_display[500:1050,500:1610,::-1]);
Task 1
The vid directory contains blinking-*.mp4 videos. Write a program that detects blinks. Optionally, you can use the eye aspect ratio from this article, or propose your own solution.
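The eye aspect ratio from that article compares the vertical eye-landmark distances to the horizontal one, EAR = (|p2 - p6| + |p3 - p5|) / (2 |p1 - p4|); it drops sharply when the eye closes. A minimal numpy sketch on synthetic landmark coordinates (the threshold value mentioned in the comment is illustrative):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered p1..p6 as in the dlib 68-point model."""
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

# Synthetic open vs. nearly closed eye shapes.
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.2], [4, 0.2], [6, 0], [4, -0.2], [2, -0.2]], float)

ear_open = eye_aspect_ratio(open_eye)
ear_closed = eye_aspect_ratio(closed_eye)
# A blink is typically counted when EAR stays below a threshold (~0.2)
# for a few consecutive frames.
```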
from scipy.spatial import distance as dist
from imutils.video import FileVideoStream
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2
! pip install opencv-contrib-python-headless==4.2.0.32
Collecting opencv-contrib-python-headless==4.2.0.32 ... Successfully installed opencv-contrib-python-headless-4.2.0.32
! pip uninstall -y opencv-python
WARNING: Skipping opencv-python as it is not installed.
! pip install imutils
Requirement already satisfied: imutils in /usr/local/lib/python3.8/dist-packages (0.5.4)
! pip install dlib
!python3 /content/wko/blink_detection_lab8.py -v /content/wko/vid/blinking-man.mp4 -p /content/wko/shape_predictor_68_face_landmarks.dat
[INFO] loading facial landmark predictor...
[INFO] starting video stream thread...
BLINKS COUNT: 0 ... BLINKS COUNT: 1  (printed once per frame; final count: 1)
Traceback (most recent call last):
  File "/content/wko/blink_detection_lab8.py", line 136, in <module>
    cv2.destroyAllWindows()
cv2.error: OpenCV(4.2.0) /io/opencv/modules/highgui/src/window.cpp:645: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'
(The final error is expected with the headless OpenCV build, which has no GUI backend; the blink counting itself completed.)
!python3 /content/wko/blink_detection_lab8.py -v /content/wko/vid/blinking-woman1.mp4 -p /content/wko/shape_predictor_68_face_landmarks.dat
[INFO] loading facial landmark predictor...
[INFO] starting video stream thread...
BLINKS COUNT: 0 ... BLINKS COUNT: 12  (printed once per frame; final count: 12)
cv2.error in cv2.destroyAllWindows() as above (headless build, no GUI backend).
!python3 /content/wko/blink_detection_lab8.py -v /content/wko/vid/blinking-woman2.mp4 -p /content/wko/shape_predictor_68_face_landmarks.dat
[INFO] loading facial landmark predictor...
[INFO] starting video stream thread...
BLINKS COUNT: 0 ... BLINKS COUNT: 4  (printed once per frame; final count: 4)
cv2.error in cv2.destroyAllWindows() as above (headless build, no GUI backend).