AITech — Machine Learning — labs

11. Neural networks (Keras)

Keras is a Python interface to the TensorFlow machine-learning platform.

To use it, you first need to install the TensorFlow library:
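
For example, in a typical Python environment it can be installed with pip (the exact command may differ on your system):

pip install tensorflow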

Below is an example implementation of a neural network that recognizes digits from the MNIST dataset, following https://keras.io/examples/vision/mnist_convnet

# Required imports

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
# Data preparation

num_classes = 10
input_shape = (28, 28, 1)

# split the data into training and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# scale the images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# make sure the images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

# convert the class labels to one-hot (binary) vectors
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
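
The labels returned by load_data are plain integers 0-9; to_categorical turns each of them into a one-hot vector of length num_classes, which is the format expected by categorical_crossentropy. A minimal illustration (assuming the imports and num_classes defined above):

# One-hot encoding example: the digit 3 becomes a vector with 1.0 at index 3
print(keras.utils.to_categorical([3], num_classes))
# expected: [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]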
# Building the model

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 26, 26, 32)        320       
                                                                 
 max_pooling2d (MaxPooling2D  (None, 13, 13, 32)       0         
 )                                                               
                                                                 
 conv2d_1 (Conv2D)           (None, 11, 11, 64)        18496     
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 5, 5, 64)         0         
 2D)                                                             
                                                                 
 flatten (Flatten)           (None, 1600)              0         
                                                                 
 dropout (Dropout)           (None, 1600)              0         
                                                                 
 dense (Dense)               (None, 10)                16010     
                                                                 
=================================================================
Total params: 34,826
Trainable params: 34,826
Non-trainable params: 0
_________________________________________________________________
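
The parameter counts in the summary can be verified by hand: a Conv2D layer has kernel_height * kernel_width * input_channels * filters weights plus one bias per filter, and a Dense layer has inputs * outputs weights plus one bias per output. A quick sanity check (plain arithmetic, not part of the original example):

# Re-deriving the numbers reported by model.summary()
conv1 = 3 * 3 * 1 * 32 + 32      # 320
conv2 = 3 * 3 * 32 * 64 + 64     # 18496
dense = 1600 * 10 + 10           # 16010
print(conv1 + conv2 + dense)     # 34826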
# Training the model
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=15, validation_split=0.1)
Epoch 1/15
422/422 [==============================] - 36s 82ms/step - loss: 0.3806 - accuracy: 0.8831 - val_loss: 0.0894 - val_accuracy: 0.9738
Epoch 2/15
422/422 [==============================] - 34s 80ms/step - loss: 0.1174 - accuracy: 0.9644 - val_loss: 0.0611 - val_accuracy: 0.9827
Epoch 3/15
422/422 [==============================] - 63s 149ms/step - loss: 0.0858 - accuracy: 0.9739 - val_loss: 0.0482 - val_accuracy: 0.9870
Epoch 4/15
422/422 [==============================] - 29s 68ms/step - loss: 0.0748 - accuracy: 0.9762 - val_loss: 0.0431 - val_accuracy: 0.9885
Epoch 5/15
422/422 [==============================] - 35s 84ms/step - loss: 0.0644 - accuracy: 0.9804 - val_loss: 0.0391 - val_accuracy: 0.9898
Epoch 6/15
422/422 [==============================] - 32s 75ms/step - loss: 0.0562 - accuracy: 0.9826 - val_loss: 0.0367 - val_accuracy: 0.9908
Epoch 7/15
422/422 [==============================] - 29s 68ms/step - loss: 0.0521 - accuracy: 0.9841 - val_loss: 0.0356 - val_accuracy: 0.9897
Epoch 8/15
422/422 [==============================] - 28s 67ms/step - loss: 0.0484 - accuracy: 0.9840 - val_loss: 0.0334 - val_accuracy: 0.9922
Epoch 9/15
422/422 [==============================] - 29s 69ms/step - loss: 0.0466 - accuracy: 0.9855 - val_loss: 0.0355 - val_accuracy: 0.9908
Epoch 10/15
422/422 [==============================] - 29s 68ms/step - loss: 0.0423 - accuracy: 0.9864 - val_loss: 0.0332 - val_accuracy: 0.9902
Epoch 11/15
422/422 [==============================] - 30s 71ms/step - loss: 0.0413 - accuracy: 0.9868 - val_loss: 0.0315 - val_accuracy: 0.9915
Epoch 12/15
422/422 [==============================] - 29s 68ms/step - loss: 0.0380 - accuracy: 0.9876 - val_loss: 0.0294 - val_accuracy: 0.9913
Epoch 13/15
422/422 [==============================] - 30s 70ms/step - loss: 0.0371 - accuracy: 0.9883 - val_loss: 0.0287 - val_accuracy: 0.9917
Epoch 14/15
422/422 [==============================] - 29s 70ms/step - loss: 0.0342 - accuracy: 0.9886 - val_loss: 0.0380 - val_accuracy: 0.9893
Epoch 15/15
422/422 [==============================] - 29s 68ms/step - loss: 0.0351 - accuracy: 0.9888 - val_loss: 0.0320 - val_accuracy: 0.9912
<keras.callbacks.History at 0x7f50553cc760>
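
model.fit returns a History object whose history attribute stores the per-epoch metrics, which makes it easy to plot learning curves. A minimal sketch, assuming matplotlib is installed and the fit call above is captured in a variable, e.g. history = model.fit(...):

import matplotlib.pyplot as plt

# `history` is a hypothetical variable holding the result of model.fit(...)
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()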
# Evaluating the model

score = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])