# projekt_widzenie
## Run the application
```bash
pip install -r requirements.txt
streamlit run main.py
```
- Open http://localhost:8501/ to see the app.
## Dataset
In total we have 197,784 images.
Links to the datasets (a loading sketch follows the list):
- https://www.kaggle.com/datasets/mrgeislinger/asl-rgb-depth-fingerspelling-spelling-it-out
- https://www.kaggle.com/datasets/grassknoted/asl-alphabet
- https://www.kaggle.com/datasets/lexset/synthetic-asl-alphabet
- https://www.kaggle.com/datasets/kuzivakwashe/significant-asl-sign-language-alphabet-dataset
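A minimal sketch of how a merged directory of these images could be loaded for training with Keras; the `data/train` path, the 80/20 validation split, and the seed are assumptions, not taken from this repo:

```python
import tensorflow as tf

img_height, img_width = 256, 256
batch_size = 128

# Assumed layout: one sub-folder per class inside data/train
train_ds = tf.keras.utils.image_dataset_from_directory(
    'data/train',
    validation_split=0.2,
    subset='training',
    seed=42,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.utils.image_dataset_from_directory(
    'data/train',
    validation_split=0.2,
    subset='validation',
    seed=42,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names  # e.g. A-Z plus del/nothing/space
```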
## Model training
The Keras library was used to train the models.
### First approach: a model trained from scratch
```python
import tensorflow as tf
from tensorflow.keras import layers, models

img_height = 256
img_width = 256
batch_size = 128
epochs = 30

# From-scratch CNN: three Conv/MaxPool blocks followed by a dense classifier
model = models.Sequential([
    layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(29, activation='softmax'),  # 29 output classes
])
```
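A minimal sketch of how this model could be compiled and trained for the 30 epochs listed above; the optimizer and loss are assumptions, and `train_ds`/`val_ds` come from the loading sketch in the Dataset section:

```python
# Compile and train the from-scratch CNN (optimizer and loss are assumed)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs)  # epochs = 30
```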
Results:
- Own test set: 22% accuracy
- Test set mixed from the Kaggle data: 80% accuracy
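For reference, a sketch of how such accuracy numbers can be obtained with Keras; the `data/test` directory is a hypothetical held-out test set, not a path from this repo:

```python
import tensorflow as tf

# Hypothetical test directory organised one sub-folder per class
test_ds = tf.keras.utils.image_dataset_from_directory(
    'data/test', image_size=(256, 256), batch_size=128)

test_loss, test_acc = model.evaluate(test_ds)
print(f"Test accuracy: {test_acc:.2%}")
```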
### Second approach: VGG16 model
```python
img_height = 224
img_width = 224
batch_size = 128
epochs = 10
```
The top 3 layers were removed and the following layers were added (a full sketch follows the snippet):
```python
# New classification head on top of the truncated VGG16 base
x = layers.Flatten()(vgg_model.output)
x = layers.Dense(len(class_names), activation='softmax')(x)
```
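For completeness, a minimal sketch of how the truncated VGG16 base could be combined with this head; the `include_top=False`/`weights='imagenet'` settings, the frozen base, the compile settings, and the `class_names` list are assumptions, not code from this repo:

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

img_height, img_width = 224, 224

# Assumed 29 ASL classes: A-Z plus del, nothing, space
class_names = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + ['del', 'nothing', 'space']

# Load the VGG16 convolutional base without its top classifier layers
vgg_model = VGG16(include_top=False, weights='imagenet',
                  input_shape=(img_height, img_width, 3))
vgg_model.trainable = False  # freeze the pretrained base (assumption)

# New classification head, as in the snippet above
x = layers.Flatten()(vgg_model.output)
x = layers.Dense(len(class_names), activation='softmax')(x)

model = Model(inputs=vgg_model.input, outputs=x)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```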
Results:
- Own test set: 40% accuracy
- Test set mixed from the Kaggle data: ???% accuracy