Compare commits


65 Commits

SHA1        Message                                       Date                        Checks (s434765-training/pipeline/head)

71f5a4a19a  conda env                                     2021-05-30 20:41:35 +02:00
c6e97633ef  sacred                                        2021-05-20 23:28:26 +02:00  successful
c514fce4bc  sacred                                        2021-05-20 23:25:37 +02:00  successful
fde3364b97  sacred                                        2021-05-20 22:28:29 +02:00  successful
cf01ed98de  sacred                                        2021-05-20 22:26:57 +02:00  successful
cf123e1fa6  sacred                                        2021-05-20 22:21:51 +02:00  successful
19be7ca6eb  sacred                                        2021-05-20 22:14:33 +02:00  successful
3e23841578  sacred                                        2021-05-20 22:10:16 +02:00  successful
b0346d0b62  trigger other projects                        2021-05-20 19:16:04 +02:00  successful
b8b98f9f85  trigger other projects                        2021-05-20 19:15:23 +02:00
4ed875434b  trigger other projects                        2021-05-20 19:03:48 +02:00  successful
eab239b6a1  trigger other projects                        2021-05-20 18:45:26 +02:00  successful
67a31c4c43  trigger other projects                        2021-05-20 18:44:00 +02:00
1f2d929c2e  evaluation branch                             2021-05-17 22:44:59 +02:00  successful
b210a2939f  evaluation branch                             2021-05-17 22:43:21 +02:00  successful
e6ed31bed8  epochs parameter                              2021-05-17 21:58:56 +02:00
134ac629bb  epochs parameter                              2021-05-17 21:57:30 +02:00  successful
4f546942b2  epochs parameter                              2021-05-17 21:54:47 +02:00
18c05640d1  epochs parameter                              2021-05-17 21:52:12 +02:00  failed
d325d66b80  epochs parameter                              2021-05-17 21:47:31 +02:00  failed
49a5c4884c  epochs parameter                              2021-05-17 21:44:44 +02:00
9d596524bb  epochs parameter                              2021-05-17 21:43:05 +02:00  failed
acbeaf5638  epochs parameter                              2021-05-17 21:36:02 +02:00  failed
2286b2b5f2  save model                                    2021-05-17 21:28:57 +02:00  successful
992e25e46c  save model                                    2021-05-17 21:27:14 +02:00  failed
771592b3d4  save model                                    2021-05-17 21:27:04 +02:00
b0da1334a6  save model                                    2021-05-17 21:10:47 +02:00  successful
51ecf3edbe  remove empty lines                            2021-05-17 21:04:57 +02:00  successful
961e10926e  remove empty lines                            2021-05-17 20:53:03 +02:00
142e30de1f  remove empty lines                            2021-05-17 20:50:29 +02:00
4419460fbf  remove empty lines                            2021-05-17 20:49:30 +02:00
109e293132  remove empty lines                            2021-05-17 20:47:40 +02:00
a17c54fa2a  remove empty lines                            2021-05-17 20:46:02 +02:00  successful
faae9093cd  remove empty lines                            2021-05-17 20:43:29 +02:00  successful
9af96b100c  rmse                                          2021-05-17 20:40:08 +02:00  successful
7a94a4d989  rmse                                          2021-05-17 20:33:33 +02:00  successful
c921ba1ce2  rmse                                          2021-05-17 20:27:28 +02:00  successful
9f36885ad1  rmse                                          2021-05-17 20:09:25 +02:00  successful
3d254d98a6  dev                                           2021-05-17 20:04:15 +02:00  successful
ed16ad4d5c  dev data (author: s434765)                    2021-05-17 19:53:21 +02:00  successful
f8126f77a3  nan error fix v3                              2021-05-17 19:29:41 +02:00  successful
a41ab2fae1  nan error fix v3                              2021-05-17 19:27:39 +02:00
83361bdf43  nan error fix v2                              2021-05-17 19:24:30 +02:00  successful
9feb02a3b8  nan error fix                                 2021-05-17 18:56:39 +02:00  successful
58d72f71c8  decrease layers                               2021-05-17 18:48:14 +02:00  successful
aa32f7db55  JenkinsfileNeural                             2021-05-15 19:53:42 +02:00  successful
ba9b25f4c3  neural network predictions                    2021-04-24 22:23:04 +02:00  failed
4c0566b5e9  data (author: s434765)                        2021-04-24 21:18:57 +02:00
bf94b00b8f  split fetching data and displaying stats v3   2021-04-14 21:26:09 +02:00
b20728ea42  split fetching data and displaying stats v2   2021-04-14 21:24:07 +02:00
7831500e7a  split fetching data and displaying stats      2021-04-14 21:22:48 +02:00
146b96312b  docker integration v10                        2021-04-14 21:17:50 +02:00
b758ab221d  docker integration v9                         2021-04-14 21:14:17 +02:00
bee8b8d8ec  docker integration v8                         2021-04-14 21:13:22 +02:00
d3cfc10455  docker integration v7                         2021-04-14 21:07:27 +02:00
3092cfe561  docker integration v6                         2021-04-14 21:06:29 +02:00
cd091c6439  docker integration v5                         2021-04-14 21:05:47 +02:00
47ba98e49c  docker integration v4                         2021-04-14 21:02:47 +02:00
dcc4ed6c4d  docker integration v3                         2021-04-14 21:00:17 +02:00
56900aeff5  docker integration v2                         2021-04-14 20:56:19 +02:00
8e6ce98c78  Merge remote-tracking branch 'origin/master'  2021-04-14 20:40:17 +02:00
379beaf2d9  docker integration                            2021-04-14 20:40:09 +02:00
2400ef5b89  Docker image                                  2021-04-14 20:25:53 +02:00
17218cd3cb  Merge remote-tracking branch 'origin/stats' (# Conflicts: Jenkinsfile)  2021-04-14 16:20:27 +02:00
1a13fd2e8d  pipeline fix (author: s434765)                2021-03-27 22:57:58 +01:00

Checks column: "successful" = all checks were successful, "failed" = some checks failed; blank where no build status was reported for the commit.
39 changed files with 3883 additions and 20 deletions

Dockerfile (new file, 19 lines)

@ -0,0 +1,19 @@
FROM ubuntu:latest
RUN apt clean
RUN apt update
RUN apt install -y python3
RUN apt install -y python3-pip
RUN apt install -y unzip
RUN pip3 install pandas
RUN pip3 install kaggle
RUN pip3 install tensorflow
RUN pip3 install sklearn
RUN pip3 install pymongo
RUN pip3 install sacred
RUN pip3 install GitPython
COPY ./data_train ./
COPY ./data_dev ./
COPY ./neural_network.sh ./
COPY ./neural_network.py ./
RUN mkdir /.kaggle
RUN chmod -R 777 /.kaggle

Jenkinsfile (new file, vendored, 45 lines)

@ -0,0 +1,45 @@
node {
    stage('Preparation') {
        properties([
            parameters([
                string(defaultValue: 'karopa',
                    description: 'Kaggle username',
                    name: 'KAGGLE_USERNAME',
                    trim: false),
                password(defaultValue: '',
                    description: 'Kaggle token',
                    name: 'KAGGLE_KEY'),
                string(defaultValue: '5000',
                    description: 'Data cutoff',
                    name: 'CUTOFF',
                    trim: false)
            ])
        ])
    }
    stage('Clone repo') {
        checkout scm
        def testImage = docker.build("karopa/ium:02")
        testImage.inside {
            withEnv(["KAGGLE_USERNAME=${params.KAGGLE_USERNAME}", "KAGGLE_KEY=${params.KAGGLE_KEY}"]) {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: 'https://git.wmi.amu.edu.pl/s434765/ium_434765']]])
                sh '''
                #!/usr/bin/env bash
                chmod 777 get_data.sh
                ./get_data.sh $CUTOFF | tee output.txt
                '''
                archiveArtifacts "data_dev"
                archiveArtifacts "data_shuf"
                archiveArtifacts "data_test"
                archiveArtifacts "data_train"
                archiveArtifacts "output.txt"
            }
        }
    }
    stage ("build training") {
        build 's434765-training/master/'
    }
}

JenkinsfileNeural (new file, 50 lines)

@ -0,0 +1,50 @@
node {
    stage('Preparation') {
        properties([
            copyArtifactPermission('s434765-training'),
            parameters([
                buildSelector(defaultSelector: lastSuccessful(),
                    description: 'Which build to use for copying artifacts',
                    name: 'BUILD_SELECTOR'),
                string(defaultValue: '30',
                    description: 'Amount of epochs',
                    name: 'EPOCHS',
                    trim: false)
            ])
        ])
    }
    stage('Clone repo') {
        /*try {*/ docker.image("karopa/ium:27").inside {
            stage('Test') {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: 'https://git.wmi.amu.edu.pl/s434765/ium_434765']]])
                copyArtifacts fingerprintArtifacts: true, projectName: 's434765-create-dataset', selector: buildParameter("BUILD_SELECTOR")
                sh '''
                #!/usr/bin/env bash
                chmod 777 neural_network.sh
                ./neural_network.sh $EPOCHS | tee output.txt
                '''
                archiveArtifacts 'output.txt'
                archiveArtifacts 'model/**/*.*'
                archiveArtifacts 'my_runs/**/*.*'
            }
            /* emailext body: 'Successful build',
                subject: "s434765",
                to: "26ab8f35.uam.onmicrosoft.com@emea.teams.ms"
            }
        }
        catch (e) {
            emailext body: 'Failed build',
                subject: "s434765",
                to: "26ab8f35.uam.onmicrosoft.com@emea.teams.ms"
            throw e*/
        }
    }
    /* stage ("build evaluation") {
        build 's434765-evaluation/evaluation/'
    }*/
}


@@ -12,13 +12,17 @@ node {
     }
     stage('Clone repo') {
-        checkout([$class: 'GitSCM', branches: [[name: '*/stats']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: 'https://git.wmi.amu.edu.pl/s434765/ium_434765']]])
-        copyArtifacts filter: 'data_shuf', fingerprintArtifacts: true, projectName: 's434765-create-dataset', selector: buildParameter("BUILD_SELECTOR")
-        sh '''
-        #!/usr/bin/env bash
-        chmod 777 get_stats_simple.sh
-        ./get_stats_simple.sh | tee output.txt
-        '''
-        archiveArtifacts 'output.txt'
+        docker.image("karopa/ium:03").inside {
+            stage('Test') {
+                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: 'https://git.wmi.amu.edu.pl/s434765/ium_434765']]])
+                copyArtifacts fingerprintArtifacts: true, projectName: 's434765-create-dataset', selector: buildParameter("BUILD_SELECTOR")
+                sh '''
+                #!/usr/bin/env bash
+                chmod 777 get_stats.sh
+                ./get_stats.sh | tee output.txt
+                '''
+                archiveArtifacts 'output.txt'
+            }
+        }
     }
 }

data_dev (new file, 544 lines)
File diff suppressed because one or more lines are too long

data_test (new file, 544 lines)
File diff suppressed because one or more lines are too long

data_train (new file, 1088 lines)
File diff suppressed because one or more lines are too long

environment.yml (new file, 145 lines)

@ -0,0 +1,145 @@
name: myenv
channels:
- defaults
dependencies:
- _tflow_select=2.3.0=eigen
- absl-py=0.12.0=py38haa95532_0
- aiohttp=3.7.4=py38h2bbff1b_1
- astor=0.8.1=py38haa95532_0
- astunparse=1.6.3=py_0
- async-timeout=3.0.1=py38haa95532_0
- attrs=21.2.0=pyhd3eb1b0_0
- blas=1.0=mkl
- blinker=1.4=py38haa95532_0
- brotlipy=0.7.0=py38h2bbff1b_1003
- ca-certificates=2021.5.25=haa95532_1
- cachetools=4.2.2=pyhd3eb1b0_0
- certifi=2020.12.5=py38haa95532_0
- cffi=1.14.5=py38hcd4344a_0
- click=8.0.1=pyhd3eb1b0_0
- coverage=5.5=py38h2bbff1b_2
- cryptography=2.9.2=py38h7a1dbc1_0
- cycler=0.10.0=py38_0
- cython=0.29.23=py38hd77b12b_0
- freetype=2.10.4=hd328e21_0
- gast=0.4.0=py_0
- google-auth=1.30.1=pyhd3eb1b0_0
- google-auth-oauthlib=0.4.1=py_2
- google-pasta=0.2.0=py_0
- grpcio=1.36.1=py38hc60d5dd_1
- h5py=2.10.0=py38h5e291fa_0
- hdf5=1.10.4=h7ebc959_0
- icc_rt=2019.0.0=h0cc432a_1
- icu=58.2=ha925a31_3
- idna=2.10=pyhd3eb1b0_0
- importlib-metadata=3.10.0=py38haa95532_0
- intel-openmp=2021.2.0=haa95532_616
- jpeg=9b=hb83a4c4_2
- keras-applications=1.0.8=py_1
- keras-preprocessing=1.1.2=pyhd3eb1b0_0
- kiwisolver=1.3.1=py38hd77b12b_0
- libpng=1.6.37=h2a8f88b_0
- libprotobuf=3.14.0=h23ce68f_0
- libtiff=4.2.0=hd0e1b90_0
- lz4-c=1.9.3=h2bbff1b_0
- markdown=3.3.4=py38haa95532_0
- matplotlib=3.3.4=py38haa95532_0
- matplotlib-base=3.3.4=py38h49ac443_0
- mkl=2021.2.0=haa95532_296
- mkl-service=2.3.0=py38h2bbff1b_1
- mkl_fft=1.3.0=py38h277e83a_2
- mkl_random=1.2.1=py38hf11a4ad_2
- multidict=5.1.0=py38h2bbff1b_2
- numpy=1.20.2=py38ha4e8547_0
- numpy-base=1.20.2=py38hc2deb75_0
- oauthlib=3.1.0=py_0
- olefile=0.46=py_0
- openssl=1.1.1k=h2bbff1b_0
- opt_einsum=3.3.0=pyhd3eb1b0_1
- pandas=1.2.4=py38hd77b12b_0
- pillow=8.2.0=py38h4fa10fc_0
- pip=21.1.1=py38haa95532_0
- protobuf=3.14.0=py38hd77b12b_1
- pyasn1=0.4.8=py_0
- pyasn1-modules=0.2.8=py_0
- pycparser=2.20=py_2
- pyjwt=2.1.0=py38haa95532_0
- pyopenssl=20.0.1=pyhd3eb1b0_1
- pyparsing=2.4.7=pyhd3eb1b0_0
- pyqt=5.9.2=py38ha925a31_4
- pyreadline=2.1=py38_1
- pysocks=1.7.1=py38haa95532_0
- python=3.8.10=hdbf39b2_7
- python-dateutil=2.8.1=pyhd3eb1b0_0
- pytz=2021.1=pyhd3eb1b0_0
- qt=5.9.7=vc14h73c81de_0
- requests=2.25.1=pyhd3eb1b0_0
- requests-oauthlib=1.3.0=py_0
- rsa=4.7.2=pyhd3eb1b0_1
- scipy=1.6.2=py38h66253e8_1
- setuptools=52.0.0=py38haa95532_0
- sip=4.19.13=py38ha925a31_0
- six=1.15.0=py38haa95532_0
- sqlite=3.35.4=h2bbff1b_0
- tensorboard=2.5.0=py_0
- tensorboard-plugin-wit=1.6.0=py_0
- tensorflow=2.3.0=mkl_py38h8c0d9a2_0
- tensorflow-base=2.3.0=eigen_py38h75a453f_0
- tensorflow-estimator=2.5.0=pyh7b7c402_0
- termcolor=1.1.0=py38haa95532_1
- tk=8.6.10=he774522_0
- tornado=6.1=py38h2bbff1b_0
- typing-extensions=3.7.4.3=hd3eb1b0_0
- typing_extensions=3.7.4.3=pyh06a4308_0
- vc=14.2=h21ff451_1
- vs2015_runtime=14.27.29016=h5e58377_2
- wheel=0.36.2=pyhd3eb1b0_0
- win_inet_pton=1.1.0=py38haa95532_0
- wincertstore=0.2=py38_0
- wrapt=1.12.1=py38he774522_1
- xz=5.2.5=h62dcd97_0
- yarl=1.6.3=py38h2bbff1b_0
- zipp=3.4.1=pyhd3eb1b0_0
- zlib=1.2.11=h62dcd97_4
- zstd=1.4.9=h19a0ad4_0
- pip:
- alembic==1.4.1
- chardet==4.0.0
- cloudpickle==1.6.0
- colorama==0.4.4
- databricks-cli==0.14.3
- docker==5.0.0
- entrypoints==0.3
- flask==2.0.1
- gitdb==4.0.7
- gitpython==3.1.17
- greenlet==1.1.0
- itsdangerous==2.0.1
- jinja2==3.0.1
- joblib==1.0.1
- kaggle==1.5.12
- mako==1.1.4
- markupsafe==2.0.1
- mlflow==1.17.0
- prometheus-client==0.10.1
- prometheus-flask-exporter==0.18.2
- python-editor==1.0.4
- python-slugify==5.0.2
- pywin32==227
- pyyaml==5.4.1
- querystring-parser==1.2.4
- scikit-learn==0.24.2
- sklearn==0.0
- smmap==4.0.0
- sqlalchemy==1.4.17
- sqlparse==0.4.1
- tabulate==0.8.9
- tensorboard-data-server==0.6.1
- text-unidecode==1.3
- threadpoolctl==2.1.0
- tqdm==4.61.0
- urllib3==1.26.5
- waitress==2.0.0
- websocket-client==1.0.1
- werkzeug==2.0.1
prefix: C:\Users\karol\anaconda3\envs\myenv

evaluation.png (new binary file, 28 KiB)

Binary file not shown.

get_data.sh (mode changed: Normal file → Executable file; 16 changed lines)

@@ -1,20 +1,14 @@
 #!/bin/bash
+rm USvideos_modified.csv
 if kaggle datasets download -d sgonkaggle/youtube-trend-with-subscriber && unzip youtube-trend-with-subscriber.zip; then
-    head -n 2 USvideos_modified.csv
     grep -v -e "^$" - USvideos_modified.csv
     COUNT=$(wc -l "USvideos_modified.csv")
     echo "${COUNT}"
     head -n -1 "USvideos_modified.csv" | shuf > "data_shuf"
     head -n 544 "data_shuf" > "data_test"
     head -n 1088 "data_shuf" | tail -n 544 > "data_dev"
-    head -n +1089 "data_shuf" > "data_train"
-    echo "Shuffled dataset"
-    wc -l "data_shuf"
-    echo "Test dataset"
-    wc -l "data_test"
-    echo "Dev dataset"
-    wc -l "data_dev"
-    echo "Train dataset"
-    wc -l "data_train"
-    python main.py USvideos_modified.csv
+    head -n 5441 "data_shuf" | tail -n 4352 > "data_train"
+    tr '\n' '' < "data_dev"
+    sed '/^$/d' "data_dev"
+    python3 get_data.py USvideos_modified.csv
 fi
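The updated script carves fixed-size slices out of the shuffled file: the first 544 lines become data_test, the next 544 become data_dev, and the following 4352 become data_train. A minimal Python sketch of the same shuffle-and-split arithmetic (the sizes come from the diff; the helper itself is hypothetical):

```python
import random

def split_dataset(lines, test_n=544, dev_n=544, train_n=4352, seed=0):
    """Shuffle records, then cut fixed-size test/dev/train slices,
    mirroring the head/tail pipeline in get_data.sh."""
    rng = random.Random(seed)
    shuffled = list(lines)
    rng.shuffle(shuffled)
    test = shuffled[:test_n]                                    # head -n 544
    dev = shuffled[test_n:test_n + dev_n]                       # head -n 1088 | tail -n 544
    train = shuffled[test_n + dev_n:test_n + dev_n + train_n]   # head -n 5441 | tail -n 4352
    return test, dev, train

test, dev, train = split_dataset(f"row{i}" for i in range(5441))
print(len(test), len(dev), len(train))  # 544 544 4352
```

Slicing an already-shuffled list guarantees the three sets are disjoint, which is the property the head/tail offsets in the shell version also rely on.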

get_stats.sh (new file, 9 lines)

@ -0,0 +1,9 @@
#!/bin/bash
echo "Shuffled dataset"
wc -l "data_shuf"
echo "Test dataset"
wc -l "data_test"
echo "Dev dataset"
wc -l "data_dev"
echo "Train dataset"
wc -l "data_train"

model/keras_metadata.pb (new file, 9 lines)

File diff suppressed because one or more lines are too long

model/saved_model.pb (new binary file)

Binary file not shown.

Binary file not shown.

Binary file not shown.

my_runs/1/config.json (new file, 4 lines)

@ -0,0 +1,4 @@
{
"epochs_amount": 30,
"seed": 511320143
}
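config.json above is what Sacred's file observer writes for each run: the experiment's parameters plus the automatically drawn per-run seed. A stdlib-only sketch of reading such a record back, e.g. to re-seed a reproduction run (the inline JSON mirrors the file above; the reproduction step is an assumption, not part of the repo):

```python
import json
import random

# Literal content of my_runs/1/config.json as captured above.
config = json.loads('{"epochs_amount": 30, "seed": 511320143}')

# Sacred records one seed per run; re-seeding RNGs with the stored
# value is the usual way to replay the run deterministically later.
random.seed(config["seed"])
print(config["epochs_amount"])  # 30
```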

my_runs/1/cout.txt (new file, 79 lines)

@ -0,0 +1,79 @@
views 0
dtype: int32
views 488
dtype: int32
likes 1
dtype: int32
likes 3345
dtype: int32
Epoch 1/30
1/19 [>.............................] - ETA: 6s - loss: 0.0834 - mean_absolute_error: 0.0834 19/19 [==============================] - 1s 10ms/step - loss: 0.0679 - mean_absolute_error: 0.0679 - val_loss: 0.0670 - val_mean_absolute_error: 0.0670
Epoch 2/30
1/19 [>.............................] - ETA: 0s - loss: 0.1142 - mean_absolute_error: 0.1142 19/19 [==============================] - 0s 2ms/step - loss: 0.0657 - mean_absolute_error: 0.0657 - val_loss: 0.0530 - val_mean_absolute_error: 0.0530
Epoch 3/30
1/19 [>.............................] - ETA: 0s - loss: 0.0940 - mean_absolute_error: 0.0940 19/19 [==============================] - 0s 2ms/step - loss: 0.0608 - mean_absolute_error: 0.0608 - val_loss: 0.0600 - val_mean_absolute_error: 0.0600
Epoch 4/30
1/19 [>.............................] - ETA: 0s - loss: 0.0524 - mean_absolute_error: 0.0524 19/19 [==============================] - 0s 2ms/step - loss: 0.0521 - mean_absolute_error: 0.0521 - val_loss: 0.0541 - val_mean_absolute_error: 0.0541
Epoch 5/30
1/19 [>.............................] - ETA: 0s - loss: 0.0440 - mean_absolute_error: 0.0440 19/19 [==============================] - 0s 2ms/step - loss: 0.0518 - mean_absolute_error: 0.0518 - val_loss: 0.0541 - val_mean_absolute_error: 0.0541
Epoch 6/30
1/19 [>.............................] - ETA: 0s - loss: 0.0576 - mean_absolute_error: 0.0576 19/19 [==============================] - 0s 2ms/step - loss: 0.0579 - mean_absolute_error: 0.0579 - val_loss: 0.0523 - val_mean_absolute_error: 0.0523
Epoch 7/30
1/19 [>.............................] - ETA: 0s - loss: 0.0310 - mean_absolute_error: 0.0310 19/19 [==============================] - 0s 2ms/step - loss: 0.0497 - mean_absolute_error: 0.0497 - val_loss: 0.0523 - val_mean_absolute_error: 0.0523
Epoch 8/30
1/19 [>.............................] - ETA: 0s - loss: 0.0628 - mean_absolute_error: 0.0628 19/19 [==============================] - 0s 2ms/step - loss: 0.0531 - mean_absolute_error: 0.0531 - val_loss: 0.0551 - val_mean_absolute_error: 0.0551
Epoch 9/30
1/19 [>.............................] - ETA: 0s - loss: 0.0425 - mean_absolute_error: 0.0425 19/19 [==============================] - 0s 2ms/step - loss: 0.0543 - mean_absolute_error: 0.0543 - val_loss: 0.0527 - val_mean_absolute_error: 0.0527
Epoch 10/30
1/19 [>.............................] - ETA: 0s - loss: 0.0560 - mean_absolute_error: 0.0560 19/19 [==============================] - 0s 2ms/step - loss: 0.0549 - mean_absolute_error: 0.0549 - val_loss: 0.0525 - val_mean_absolute_error: 0.0525
Epoch 11/30
1/19 [>.............................] - ETA: 0s - loss: 0.0391 - mean_absolute_error: 0.0391 19/19 [==============================] - 0s 2ms/step - loss: 0.0520 - mean_absolute_error: 0.0520 - val_loss: 0.0556 - val_mean_absolute_error: 0.0556
Epoch 12/30
1/19 [>.............................] - ETA: 0s - loss: 0.0417 - mean_absolute_error: 0.0417 19/19 [==============================] - 0s 2ms/step - loss: 0.0578 - mean_absolute_error: 0.0578 - val_loss: 0.0522 - val_mean_absolute_error: 0.0522
Epoch 13/30
1/19 [>.............................] - ETA: 0s - loss: 0.0834 - mean_absolute_error: 0.0834 19/19 [==============================] - 0s 2ms/step - loss: 0.0605 - mean_absolute_error: 0.0605 - val_loss: 0.0532 - val_mean_absolute_error: 0.0532
Epoch 14/30
1/19 [>.............................] - ETA: 0s - loss: 0.0430 - mean_absolute_error: 0.0430 19/19 [==============================] - 0s 2ms/step - loss: 0.0582 - mean_absolute_error: 0.0582 - val_loss: 0.0526 - val_mean_absolute_error: 0.0526
Epoch 15/30
1/19 [>.............................] - ETA: 0s - loss: 0.0506 - mean_absolute_error: 0.0506 19/19 [==============================] - 0s 2ms/step - loss: 0.0512 - mean_absolute_error: 0.0512 - val_loss: 0.0533 - val_mean_absolute_error: 0.0533
Epoch 16/30
1/19 [>.............................] - ETA: 0s - loss: 0.0402 - mean_absolute_error: 0.0402 19/19 [==============================] - 0s 2ms/step - loss: 0.0514 - mean_absolute_error: 0.0514 - val_loss: 0.0537 - val_mean_absolute_error: 0.0537
Epoch 17/30
1/19 [>.............................] - ETA: 0s - loss: 0.0247 - mean_absolute_error: 0.0247 19/19 [==============================] - 0s 2ms/step - loss: 0.0463 - mean_absolute_error: 0.0463 - val_loss: 0.0519 - val_mean_absolute_error: 0.0519
Epoch 18/30
1/19 [>.............................] - ETA: 0s - loss: 0.0401 - mean_absolute_error: 0.0401 19/19 [==============================] - 0s 2ms/step - loss: 0.0537 - mean_absolute_error: 0.0537 - val_loss: 0.0568 - val_mean_absolute_error: 0.0568
Epoch 19/30
1/19 [>.............................] - ETA: 0s - loss: 0.0930 - mean_absolute_error: 0.0930 19/19 [==============================] - 0s 2ms/step - loss: 0.0534 - mean_absolute_error: 0.0534 - val_loss: 0.0523 - val_mean_absolute_error: 0.0523
Epoch 20/30
1/19 [>.............................] - ETA: 0s - loss: 0.0631 - mean_absolute_error: 0.0631 19/19 [==============================] - 0s 2ms/step - loss: 0.0577 - mean_absolute_error: 0.0577 - val_loss: 0.0532 - val_mean_absolute_error: 0.0532
Epoch 21/30
1/19 [>.............................] - ETA: 0s - loss: 0.0524 - mean_absolute_error: 0.0524 19/19 [==============================] - 0s 2ms/step - loss: 0.0538 - mean_absolute_error: 0.0538 - val_loss: 0.0540 - val_mean_absolute_error: 0.0540
Epoch 22/30
1/19 [>.............................] - ETA: 0s - loss: 0.0435 - mean_absolute_error: 0.0435 19/19 [==============================] - 0s 2ms/step - loss: 0.0510 - mean_absolute_error: 0.0510 - val_loss: 0.0594 - val_mean_absolute_error: 0.0594
Epoch 23/30
1/19 [>.............................] - ETA: 0s - loss: 0.0324 - mean_absolute_error: 0.0324 19/19 [==============================] - 0s 2ms/step - loss: 0.0573 - mean_absolute_error: 0.0573 - val_loss: 0.0537 - val_mean_absolute_error: 0.0537
Epoch 24/30
1/19 [>.............................] - ETA: 0s - loss: 0.0354 - mean_absolute_error: 0.0354 19/19 [==============================] - 0s 2ms/step - loss: 0.0510 - mean_absolute_error: 0.0510 - val_loss: 0.0546 - val_mean_absolute_error: 0.0546
Epoch 25/30
1/19 [>.............................] - ETA: 0s - loss: 0.0474 - mean_absolute_error: 0.0474 19/19 [==============================] - 0s 2ms/step - loss: 0.0539 - mean_absolute_error: 0.0539 - val_loss: 0.0525 - val_mean_absolute_error: 0.0525
Epoch 26/30
1/19 [>.............................] - ETA: 0s - loss: 0.0928 - mean_absolute_error: 0.0928 19/19 [==============================] - 0s 2ms/step - loss: 0.0612 - mean_absolute_error: 0.0612 - val_loss: 0.0540 - val_mean_absolute_error: 0.0540
Epoch 27/30
1/19 [>.............................] - ETA: 0s - loss: 0.0582 - mean_absolute_error: 0.0582 19/19 [==============================] - 0s 2ms/step - loss: 0.0535 - mean_absolute_error: 0.0535 - val_loss: 0.0548 - val_mean_absolute_error: 0.0548
Epoch 28/30
1/19 [>.............................] - ETA: 0s - loss: 0.0415 - mean_absolute_error: 0.0415 19/19 [==============================] - 0s 2ms/step - loss: 0.0511 - mean_absolute_error: 0.0511 - val_loss: 0.0533 - val_mean_absolute_error: 0.0533
Epoch 29/30
1/19 [>.............................] - ETA: 0s - loss: 0.0491 - mean_absolute_error: 0.0491 19/19 [==============================] - 0s 3ms/step - loss: 0.0532 - mean_absolute_error: 0.0532 - val_loss: 0.0528 - val_mean_absolute_error: 0.0528
Epoch 30/30
1/19 [>.............................] - ETA: 0s - loss: 0.0475 - mean_absolute_error: 0.0475 19/19 [==============================] - 0s 2ms/step - loss: 0.0529 - mean_absolute_error: 0.0529 - val_loss: 0.0529 - val_mean_absolute_error: 0.0529
views 1
dtype: int32
views 488
dtype: int32
likes 1
dtype: int32
likes 3345
dtype: int32
114882.99377127373
114882.99377127373
114882.99377127373
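The value printed at the end of the log appears to be an RMSE reported on the raw (denormalized) view-count scale, which is why it is orders of magnitude larger than the scaled losses in the epoch lines above. A minimal RMSE sketch in pure Python (the sample values are hypothetical, not taken from the run):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired observations."""
    assert len(y_true) == len(y_pred) and y_true
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical raw view counts vs. model predictions.
actual    = [120_000.0, 480_000.0, 250_000.0]
predicted = [100_000.0, 500_000.0, 300_000.0]
print(rmse(actual, predicted))
```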

my_runs/1/info.json (new file, 3 lines)

@ -0,0 +1,3 @@
{
"prepare_message_ts": "2021-05-20 21:59:18.264490"
}

my_runs/1/metrics.json (new file, 1 line)

@ -0,0 +1 @@
{}

my_runs/1/run.json (new file, 87 lines)

@ -0,0 +1,87 @@
{
"artifacts": [],
"command": "my_main",
"experiment": {
"base_dir": "C:\\Users\\karol\\PycharmProjects\\ium_434765",
"dependencies": [
"numpy==1.19.5",
"pandas==1.2.4",
"sacred==0.8.2",
"tensorflow==2.5.0rc1"
],
"mainfile": "neural_network.py",
"name": "sacred_scopes",
"repositories": [
{
"commit": "07479089e2d0bd86c8b0dd3bb005f7178078cc34",
"dirty": true,
"url": "https://git.wmi.amu.edu.pl/s434765/ium_434765.git"
},
{
"commit": "07479089e2d0bd86c8b0dd3bb005f7178078cc34",
"dirty": true,
"url": "https://git.wmi.amu.edu.pl/s434765/ium_434765.git"
}
],
"sources": [
[
"evaluate_network.py",
"_sources\\evaluate_network_6bc39a6cabbc78720ddbbd5b23f51cc3.py"
],
[
"neural_network.py",
"_sources\\neural_network_cdaa9eab635a60c87899a6eaac9e398e.py"
]
]
},
"heartbeat": "2021-05-20T19:59:22.263859",
"host": {
"ENV": {},
"cpu": "Unknown",
"gpus": {
"driver_version": "452.06",
"gpus": [
{
"model": "GeForce GTX 1650 Ti",
"persistence_mode": false,
"total_memory": 4096
}
]
},
"hostname": "DESKTOP-5PRPHO6",
"os": [
"Windows",
"Windows-10-10.0.19041-SP0"
],
"python_version": "3.9.2"
},
"meta": {
"command": "my_main",
"options": {
"--beat-interval": null,
"--capture": null,
"--comment": null,
"--debug": false,
"--enforce_clean": false,
"--file_storage": null,
"--force": false,
"--help": false,
"--loglevel": null,
"--mongo_db": null,
"--name": null,
"--pdb": false,
"--print-config": false,
"--priority": null,
"--queue": false,
"--s3": null,
"--sql": null,
"--tiny_db": null,
"--unobserved": false
}
},
"resources": [],
"result": null,
"start_time": "2021-05-20T19:59:18.258489",
"status": "COMPLETED",
"stop_time": "2021-05-20T19:59:22.263859"
}

my_runs/2/config.json (new file, 4 lines)

@ -0,0 +1,4 @@
{
"epochs_amount": 30,
"seed": 535480662
}

my_runs/2/cout.txt (new file, 78 lines)

@ -0,0 +1,78 @@
views 0
dtype: int32
views 488
dtype: int32
likes 1
dtype: int32
likes 3345
dtype: int32
Epoch 1/30
1/19 [>.............................] - ETA: 6s - loss: 0.1168 - mean_absolute_error: 0.1168 19/19 [==============================] - 1s 10ms/step - loss: 0.0788 - mean_absolute_error: 0.0788 - val_loss: 0.0639 - val_mean_absolute_error: 0.0639
Epoch 2/30
1/19 [>.............................] - ETA: 0s - loss: 0.0699 - mean_absolute_error: 0.0699 19/19 [==============================] - 0s 2ms/step - loss: 0.0622 - mean_absolute_error: 0.0622 - val_loss: 0.0589 - val_mean_absolute_error: 0.0589
Epoch 3/30
1/19 [>.............................] - ETA: 0s - loss: 0.0547 - mean_absolute_error: 0.0547 19/19 [==============================] - 0s 2ms/step - loss: 0.0566 - mean_absolute_error: 0.0566 - val_loss: 0.0542 - val_mean_absolute_error: 0.0542
Epoch 4/30
1/19 [>.............................] - ETA: 0s - loss: 0.0351 - mean_absolute_error: 0.0351 19/19 [==============================] - 0s 2ms/step - loss: 0.0534 - mean_absolute_error: 0.0534 - val_loss: 0.0524 - val_mean_absolute_error: 0.0524
Epoch 5/30
1/19 [>.............................] - ETA: 0s - loss: 0.0436 - mean_absolute_error: 0.0436 19/19 [==============================] - 0s 2ms/step - loss: 0.0562 - mean_absolute_error: 0.0562 - val_loss: 0.0560 - val_mean_absolute_error: 0.0560
Epoch 6/30
1/19 [>.............................] - ETA: 0s - loss: 0.0474 - mean_absolute_error: 0.0474 19/19 [==============================] - 0s 2ms/step - loss: 0.0513 - mean_absolute_error: 0.0513 - val_loss: 0.0533 - val_mean_absolute_error: 0.0533
Epoch 7/30
1/19 [>.............................] - ETA: 0s - loss: 0.0714 - mean_absolute_error: 0.0714 19/19 [==============================] - 0s 2ms/step - loss: 0.0562 - mean_absolute_error: 0.0562 - val_loss: 0.0519 - val_mean_absolute_error: 0.0519
Epoch 8/30
1/19 [>.............................] - ETA: 0s - loss: 0.0567 - mean_absolute_error: 0.0567 19/19 [==============================] - 0s 2ms/step - loss: 0.0535 - mean_absolute_error: 0.0535 - val_loss: 0.0526 - val_mean_absolute_error: 0.0526
Epoch 9/30
1/19 [>.............................] - ETA: 0s - loss: 0.0472 - mean_absolute_error: 0.0472 19/19 [==============================] - 0s 2ms/step - loss: 0.0571 - mean_absolute_error: 0.0571 - val_loss: 0.0559 - val_mean_absolute_error: 0.0559
Epoch 10/30
1/19 [>.............................] - ETA: 0s - loss: 0.0634 - mean_absolute_error: 0.0634 19/19 [==============================] - 0s 2ms/step - loss: 0.0528 - mean_absolute_error: 0.0528 - val_loss: 0.0527 - val_mean_absolute_error: 0.0527
Epoch 11/30
1/19 [>.............................] - ETA: 0s - loss: 0.0412 - mean_absolute_error: 0.0412 19/19 [==============================] - 0s 2ms/step - loss: 0.0529 - mean_absolute_error: 0.0529 - val_loss: 0.0542 - val_mean_absolute_error: 0.0542
Epoch 12/30
1/19 [>.............................] - ETA: 0s - loss: 0.0390 - mean_absolute_error: 0.0390 19/19 [==============================] - 0s 2ms/step - loss: 0.0496 - mean_absolute_error: 0.0496 - val_loss: 0.0596 - val_mean_absolute_error: 0.0596
Epoch 13/30
1/19 [>.............................] - ETA: 0s - loss: 0.0625 - mean_absolute_error: 0.0625 19/19 [==============================] - 0s 2ms/step - loss: 0.0545 - mean_absolute_error: 0.0545 - val_loss: 0.0560 - val_mean_absolute_error: 0.0560
Epoch 14/30
1/19 [>.............................] - ETA: 0s - loss: 0.0206 - mean_absolute_error: 0.0206 19/19 [==============================] - 0s 2ms/step - loss: 0.0542 - mean_absolute_error: 0.0542 - val_loss: 0.0542 - val_mean_absolute_error: 0.0542
Epoch 15/30
1/19 [>.............................] - ETA: 0s - loss: 0.0311 - mean_absolute_error: 0.0311 19/19 [==============================] - 0s 3ms/step - loss: 0.0486 - mean_absolute_error: 0.0486 - val_loss: 0.0527 - val_mean_absolute_error: 0.0527
Epoch 16/30
1/19 [>.............................] - ETA: 0s - loss: 0.0270 - mean_absolute_error: 0.0270 19/19 [==============================] - 0s 2ms/step - loss: 0.0477 - mean_absolute_error: 0.0477 - val_loss: 0.0558 - val_mean_absolute_error: 0.0558
Epoch 17/30
1/19 [>.............................] - ETA: 0s - loss: 0.0808 - mean_absolute_error: 0.0808 19/19 [==============================] - 0s 2ms/step - loss: 0.0563 - mean_absolute_error: 0.0563 - val_loss: 0.0546 - val_mean_absolute_error: 0.0546
Epoch 18/30
1/19 [>.............................] - ETA: 0s - loss: 0.0433 - mean_absolute_error: 0.0433 19/19 [==============================] - 0s 2ms/step - loss: 0.0499 - mean_absolute_error: 0.0499 - val_loss: 0.0551 - val_mean_absolute_error: 0.0551
Epoch 19/30
1/19 [>.............................] - ETA: 0s - loss: 0.0431 - mean_absolute_error: 0.0431 19/19 [==============================] - 0s 2ms/step - loss: 0.0524 - mean_absolute_error: 0.0524 - val_loss: 0.0530 - val_mean_absolute_error: 0.0530
Epoch 20/30
1/19 [>.............................] - ETA: 0s - loss: 0.0298 - mean_absolute_error: 0.0298 19/19 [==============================] - 0s 2ms/step - loss: 0.0490 - mean_absolute_error: 0.0490 - val_loss: 0.0517 - val_mean_absolute_error: 0.0517
Epoch 21/30
1/19 [>.............................] - ETA: 0s - loss: 0.0499 - mean_absolute_error: 0.0499 19/19 [==============================] - 0s 2ms/step - loss: 0.0555 - mean_absolute_error: 0.0555 - val_loss: 0.0557 - val_mean_absolute_error: 0.0557
Epoch 22/30
1/19 [>.............................] - ETA: 0s - loss: 0.0401 - mean_absolute_error: 0.0401 19/19 [==============================] - 0s 2ms/step - loss: 0.0524 - mean_absolute_error: 0.0524 - val_loss: 0.0602 - val_mean_absolute_error: 0.0602
Epoch 23/30
1/19 [>.............................] - ETA: 0s - loss: 0.0652 - mean_absolute_error: 0.0652 19/19 [==============================] - 0s 2ms/step - loss: 0.0596 - mean_absolute_error: 0.0596 - val_loss: 0.0567 - val_mean_absolute_error: 0.0567
Epoch 24/30
1/19 [>.............................] - ETA: 0s - loss: 0.0275 - mean_absolute_error: 0.0275 19/19 [==============================] - 0s 2ms/step - loss: 0.0562 - mean_absolute_error: 0.0562 - val_loss: 0.0531 - val_mean_absolute_error: 0.0531
Epoch 25/30
1/19 [>.............................] - ETA: 0s - loss: 0.0602 - mean_absolute_error: 0.0602 19/19 [==============================] - 0s 2ms/step - loss: 0.0576 - mean_absolute_error: 0.0576 - val_loss: 0.0540 - val_mean_absolute_error: 0.0540
Epoch 26/30
1/19 [>.............................] - ETA: 0s - loss: 0.0388 - mean_absolute_error: 0.0388 19/19 [==============================] - 0s 2ms/step - loss: 0.0542 - mean_absolute_error: 0.0542 - val_loss: 0.0555 - val_mean_absolute_error: 0.0555
Epoch 27/30
1/19 [>.............................] - ETA: 0s - loss: 0.0711 - mean_absolute_error: 0.0711 19/19 [==============================] - 0s 2ms/step - loss: 0.0560 - mean_absolute_error: 0.0560 - val_loss: 0.0538 - val_mean_absolute_error: 0.0538
Epoch 28/30
1/19 [>.............................] - ETA: 0s - loss: 0.0875 - mean_absolute_error: 0.0875 19/19 [==============================] - 0s 2ms/step - loss: 0.0614 - mean_absolute_error: 0.0614 - val_loss: 0.0535 - val_mean_absolute_error: 0.0535
Epoch 29/30
1/19 [>.............................] - ETA: 0s - loss: 0.0462 - mean_absolute_error: 0.0462 19/19 [==============================] - 0s 2ms/step - loss: 0.0544 - mean_absolute_error: 0.0544 - val_loss: 0.0562 - val_mean_absolute_error: 0.0562
Epoch 30/30
1/19 [>.............................] - ETA: 0s - loss: 0.0588 - mean_absolute_error: 0.0588 19/19 [==============================] - 0s 2ms/step - loss: 0.0582 - mean_absolute_error: 0.0582 - val_loss: 0.0593 - val_mean_absolute_error: 0.0593
views 1
dtype: int32
views 488
dtype: int32
likes 1
dtype: int32
likes 3345
dtype: int32
129787.96004765884
129787.96004765884

my_runs/2/info.json Normal file

@@ -0,0 +1,3 @@
{
"prepare_message_ts": "2021-05-20 22:01:49.105722"
}

my_runs/2/metrics.json Normal file

@@ -0,0 +1,13 @@
{
"training.metrics": {
"steps": [
0
],
"timestamps": [
"2021-05-20T20:01:53.071700"
],
"values": [
129787.96004765884
]
}
}

my_runs/2/run.json Normal file

@@ -0,0 +1,87 @@
{
"artifacts": [],
"command": "my_main",
"experiment": {
"base_dir": "C:\\Users\\karol\\PycharmProjects\\ium_434765",
"dependencies": [
"numpy==1.19.5",
"pandas==1.2.4",
"sacred==0.8.2",
"tensorflow==2.5.0rc1"
],
"mainfile": "neural_network.py",
"name": "sacred_scopes",
"repositories": [
{
"commit": "07479089e2d0bd86c8b0dd3bb005f7178078cc34",
"dirty": true,
"url": "https://git.wmi.amu.edu.pl/s434765/ium_434765.git"
},
{
"commit": "07479089e2d0bd86c8b0dd3bb005f7178078cc34",
"dirty": true,
"url": "https://git.wmi.amu.edu.pl/s434765/ium_434765.git"
}
],
"sources": [
[
"evaluate_network.py",
"_sources\\evaluate_network_6bc39a6cabbc78720ddbbd5b23f51cc3.py"
],
[
"neural_network.py",
"_sources\\neural_network_eca667942d0304c50d970a67f9012302.py"
]
]
},
"heartbeat": "2021-05-20T20:01:53.071700",
"host": {
"ENV": {},
"cpu": "Unknown",
"gpus": {
"driver_version": "452.06",
"gpus": [
{
"model": "GeForce GTX 1650 Ti",
"persistence_mode": false,
"total_memory": 4096
}
]
},
"hostname": "DESKTOP-5PRPHO6",
"os": [
"Windows",
"Windows-10-10.0.19041-SP0"
],
"python_version": "3.9.2"
},
"meta": {
"command": "my_main",
"options": {
"--beat-interval": null,
"--capture": null,
"--comment": null,
"--debug": false,
"--enforce_clean": false,
"--file_storage": null,
"--force": false,
"--help": false,
"--loglevel": null,
"--mongo_db": null,
"--name": null,
"--pdb": false,
"--print-config": false,
"--priority": null,
"--queue": false,
"--s3": null,
"--sql": null,
"--tiny_db": null,
"--unobserved": false
}
},
"resources": [],
"result": null,
"start_time": "2021-05-20T20:01:49.099728",
"status": "COMPLETED",
"stop_time": "2021-05-20T20:01:53.071700"
}

my_runs/3/config.json Normal file

@@ -0,0 +1,4 @@
{
"epochs_amount": 30,
"seed": 981983024
}

my_runs/3/cout.txt Normal file

@@ -0,0 +1,78 @@
views 0
dtype: int32
views 488
dtype: int32
likes 1
dtype: int32
likes 3345
dtype: int32
Epoch 1/30
1/19 [>.............................] - ETA: 7s - loss: 0.1234 - mean_absolute_error: 0.1234 19/19 [==============================] - 1s 10ms/step - loss: 0.0687 - mean_absolute_error: 0.0687 - val_loss: 0.0587 - val_mean_absolute_error: 0.0587
Epoch 2/30
1/19 [>.............................] - ETA: 0s - loss: 0.0764 - mean_absolute_error: 0.0764 19/19 [==============================] - 0s 2ms/step - loss: 0.0583 - mean_absolute_error: 0.0583 - val_loss: 0.0541 - val_mean_absolute_error: 0.0541
Epoch 3/30
1/19 [>.............................] - ETA: 0s - loss: 0.0781 - mean_absolute_error: 0.0781 19/19 [==============================] - 0s 2ms/step - loss: 0.0595 - mean_absolute_error: 0.0595 - val_loss: 0.0572 - val_mean_absolute_error: 0.0572
Epoch 4/30
1/19 [>.............................] - ETA: 0s - loss: 0.0564 - mean_absolute_error: 0.0564 19/19 [==============================] - 0s 2ms/step - loss: 0.0592 - mean_absolute_error: 0.0592 - val_loss: 0.0541 - val_mean_absolute_error: 0.0541
Epoch 5/30
1/19 [>.............................] - ETA: 0s - loss: 0.0608 - mean_absolute_error: 0.0608 19/19 [==============================] - 0s 2ms/step - loss: 0.0552 - mean_absolute_error: 0.0552 - val_loss: 0.0524 - val_mean_absolute_error: 0.0524
Epoch 6/30
1/19 [>.............................] - ETA: 0s - loss: 0.0346 - mean_absolute_error: 0.0346 19/19 [==============================] - 0s 2ms/step - loss: 0.0510 - mean_absolute_error: 0.0510 - val_loss: 0.0544 - val_mean_absolute_error: 0.0544
Epoch 7/30
1/19 [>.............................] - ETA: 0s - loss: 0.0569 - mean_absolute_error: 0.0569 19/19 [==============================] - 0s 2ms/step - loss: 0.0570 - mean_absolute_error: 0.0570 - val_loss: 0.0521 - val_mean_absolute_error: 0.0521
Epoch 8/30
1/19 [>.............................] - ETA: 0s - loss: 0.0565 - mean_absolute_error: 0.0565 19/19 [==============================] - 0s 2ms/step - loss: 0.0558 - mean_absolute_error: 0.0558 - val_loss: 0.0542 - val_mean_absolute_error: 0.0542
Epoch 9/30
1/19 [>.............................] - ETA: 0s - loss: 0.0829 - mean_absolute_error: 0.0829 19/19 [==============================] - 0s 2ms/step - loss: 0.0563 - mean_absolute_error: 0.0563 - val_loss: 0.0535 - val_mean_absolute_error: 0.0535
Epoch 10/30
1/19 [>.............................] - ETA: 0s - loss: 0.0298 - mean_absolute_error: 0.0298 19/19 [==============================] - 0s 2ms/step - loss: 0.0509 - mean_absolute_error: 0.0509 - val_loss: 0.0520 - val_mean_absolute_error: 0.0520
Epoch 11/30
1/19 [>.............................] - ETA: 0s - loss: 0.0376 - mean_absolute_error: 0.0376 19/19 [==============================] - 0s 2ms/step - loss: 0.0546 - mean_absolute_error: 0.0546 - val_loss: 0.0557 - val_mean_absolute_error: 0.0557
Epoch 12/30
1/19 [>.............................] - ETA: 0s - loss: 0.0577 - mean_absolute_error: 0.0577 19/19 [==============================] - 0s 2ms/step - loss: 0.0567 - mean_absolute_error: 0.0567 - val_loss: 0.0521 - val_mean_absolute_error: 0.0521
Epoch 13/30
1/19 [>.............................] - ETA: 0s - loss: 0.0537 - mean_absolute_error: 0.0537 19/19 [==============================] - 0s 2ms/step - loss: 0.0552 - mean_absolute_error: 0.0552 - val_loss: 0.0556 - val_mean_absolute_error: 0.0556
Epoch 14/30
1/19 [>.............................] - ETA: 0s - loss: 0.0696 - mean_absolute_error: 0.0696 19/19 [==============================] - 0s 2ms/step - loss: 0.0616 - mean_absolute_error: 0.0616 - val_loss: 0.0571 - val_mean_absolute_error: 0.0571
Epoch 15/30
1/19 [>.............................] - ETA: 0s - loss: 0.0726 - mean_absolute_error: 0.0726 19/19 [==============================] - 0s 2ms/step - loss: 0.0556 - mean_absolute_error: 0.0556 - val_loss: 0.0531 - val_mean_absolute_error: 0.0531
Epoch 16/30
1/19 [>.............................] - ETA: 0s - loss: 0.0448 - mean_absolute_error: 0.0448 19/19 [==============================] - 0s 2ms/step - loss: 0.0533 - mean_absolute_error: 0.0533 - val_loss: 0.0562 - val_mean_absolute_error: 0.0562
Epoch 17/30
1/19 [>.............................] - ETA: 0s - loss: 0.0458 - mean_absolute_error: 0.0458 19/19 [==============================] - 0s 2ms/step - loss: 0.0553 - mean_absolute_error: 0.0553 - val_loss: 0.0558 - val_mean_absolute_error: 0.0558
Epoch 18/30
1/19 [>.............................] - ETA: 0s - loss: 0.0547 - mean_absolute_error: 0.0547 19/19 [==============================] - 0s 2ms/step - loss: 0.0590 - mean_absolute_error: 0.0590 - val_loss: 0.0561 - val_mean_absolute_error: 0.0561
Epoch 19/30
1/19 [>.............................] - ETA: 0s - loss: 0.0402 - mean_absolute_error: 0.0402 19/19 [==============================] - 0s 2ms/step - loss: 0.0579 - mean_absolute_error: 0.0579 - val_loss: 0.0554 - val_mean_absolute_error: 0.0554
Epoch 20/30
1/19 [>.............................] - ETA: 0s - loss: 0.0614 - mean_absolute_error: 0.0614 19/19 [==============================] - 0s 2ms/step - loss: 0.0558 - mean_absolute_error: 0.0558 - val_loss: 0.0539 - val_mean_absolute_error: 0.0539
Epoch 21/30
1/19 [>.............................] - ETA: 0s - loss: 0.0492 - mean_absolute_error: 0.0492 19/19 [==============================] - 0s 2ms/step - loss: 0.0525 - mean_absolute_error: 0.0525 - val_loss: 0.0540 - val_mean_absolute_error: 0.0540
Epoch 22/30
1/19 [>.............................] - ETA: 0s - loss: 0.0554 - mean_absolute_error: 0.0554 19/19 [==============================] - 0s 2ms/step - loss: 0.0595 - mean_absolute_error: 0.0595 - val_loss: 0.0533 - val_mean_absolute_error: 0.0533
Epoch 23/30
1/19 [>.............................] - ETA: 0s - loss: 0.0664 - mean_absolute_error: 0.0664 19/19 [==============================] - 0s 2ms/step - loss: 0.0533 - mean_absolute_error: 0.0533 - val_loss: 0.0518 - val_mean_absolute_error: 0.0518
Epoch 24/30
1/19 [>.............................] - ETA: 0s - loss: 0.0282 - mean_absolute_error: 0.0282 19/19 [==============================] - 0s 2ms/step - loss: 0.0471 - mean_absolute_error: 0.0471 - val_loss: 0.0517 - val_mean_absolute_error: 0.0517
Epoch 25/30
1/19 [>.............................] - ETA: 0s - loss: 0.0456 - mean_absolute_error: 0.0456 19/19 [==============================] - 0s 2ms/step - loss: 0.0473 - mean_absolute_error: 0.0473 - val_loss: 0.0536 - val_mean_absolute_error: 0.0536
Epoch 26/30
1/19 [>.............................] - ETA: 0s - loss: 0.0668 - mean_absolute_error: 0.0668 19/19 [==============================] - 0s 2ms/step - loss: 0.0571 - mean_absolute_error: 0.0571 - val_loss: 0.0532 - val_mean_absolute_error: 0.0532
Epoch 27/30
1/19 [>.............................] - ETA: 0s - loss: 0.0602 - mean_absolute_error: 0.0602 19/19 [==============================] - 0s 2ms/step - loss: 0.0558 - mean_absolute_error: 0.0558 - val_loss: 0.0520 - val_mean_absolute_error: 0.0520
Epoch 28/30
1/19 [>.............................] - ETA: 0s - loss: 0.0631 - mean_absolute_error: 0.0631 19/19 [==============================] - 0s 2ms/step - loss: 0.0557 - mean_absolute_error: 0.0557 - val_loss: 0.0528 - val_mean_absolute_error: 0.0528
Epoch 29/30
1/19 [>.............................] - ETA: 0s - loss: 0.0601 - mean_absolute_error: 0.0601 19/19 [==============================] - 0s 2ms/step - loss: 0.0526 - mean_absolute_error: 0.0526 - val_loss: 0.0521 - val_mean_absolute_error: 0.0521
Epoch 30/30
1/19 [>.............................] - ETA: 0s - loss: 0.0508 - mean_absolute_error: 0.0508 19/19 [==============================] - 0s 2ms/step - loss: 0.0542 - mean_absolute_error: 0.0542 - val_loss: 0.0534 - val_mean_absolute_error: 0.0534
views 1
dtype: int32
views 488
dtype: int32
likes 1
dtype: int32
likes 3345
dtype: int32
114831.63920784603
114831.63920784603

my_runs/3/info.json Normal file

@@ -0,0 +1,3 @@
{
"prepare_message_ts": "2021-05-20 22:06:00.289863"
}

my_runs/3/metrics.json Normal file

@@ -0,0 +1,13 @@
{
"training.metrics": {
"steps": [
0
],
"timestamps": [
"2021-05-20T20:06:03.338305"
],
"values": [
114831.63920784603
]
}
}

my_runs/3/run.json Normal file

@@ -0,0 +1,79 @@
{
"artifacts": [],
"command": "my_main",
"experiment": {
"base_dir": "C:\\Users\\karol\\PycharmProjects\\ium_434765",
"dependencies": [
"numpy==1.19.5",
"pandas==1.2.4",
"sacred==0.8.2",
"scikit-learn==0.24.1",
"tensorflow==2.5.0rc1"
],
"mainfile": "neural_network.py",
"name": "sacred_scopes",
"repositories": [
{
"commit": "b0346d0b62846839e512344b20a566135e07a4b2",
"dirty": true,
"url": "https://git.wmi.amu.edu.pl/s434765/ium_434765.git"
}
],
"sources": [
[
"neural_network.py",
"_sources\\neural_network_33e5177d0655bf5fef22fcd226db36b1.py"
]
]
},
"heartbeat": "2021-05-20T20:06:03.339305",
"host": {
"ENV": {},
"cpu": "Unknown",
"gpus": {
"driver_version": "452.06",
"gpus": [
{
"model": "GeForce GTX 1650 Ti",
"persistence_mode": false,
"total_memory": 4096
}
]
},
"hostname": "DESKTOP-5PRPHO6",
"os": [
"Windows",
"Windows-10-10.0.19041-SP0"
],
"python_version": "3.9.2"
},
"meta": {
"command": "my_main",
"options": {
"--beat-interval": null,
"--capture": null,
"--comment": null,
"--debug": false,
"--enforce_clean": false,
"--file_storage": null,
"--force": false,
"--help": false,
"--loglevel": null,
"--mongo_db": null,
"--name": null,
"--pdb": false,
"--print-config": false,
"--priority": null,
"--queue": false,
"--s3": null,
"--sql": null,
"--tiny_db": null,
"--unobserved": false
}
},
"resources": [],
"result": null,
"start_time": "2021-05-20T20:06:00.285864",
"status": "COMPLETED",
"stop_time": "2021-05-20T20:06:03.339305"
}


@@ -0,0 +1,53 @@
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
from tensorflow import keras
import matplotlib.pyplot as plt


def evaluate_model():
    model = keras.models.load_model('model')
    data = pd.read_csv("data_dev", sep=',', error_bad_lines=False,
                       skip_blank_lines=True, nrows=527, names=["video_id", "last_trending_date",
                                                                "publish_date", "publish_hour", "category_id",
                                                                "channel_title", "views", "likes", "dislikes",
                                                                "comment_count"]).dropna()
    X_test = data.loc[:, data.columns == "views"].astype(int)
    y_test = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X_test)
    max_val_sub = np.max(X_test)
    X_test = (X_test - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y_test)
    max_val_like = np.max(y_test)
    print(min_val_like)
    print(max_val_like)
    prediction = model.predict(X_test)
    prediction_denormalized = []
    for pred in prediction:
        denorm = pred[0] * (max_val_like[0] - min_val_like[0]) + min_val_like[0]
        prediction_denormalized.append(denorm)
    with open("predictions.txt", "w") as f:
        for (pred, test) in zip(prediction_denormalized, y_test.values):
            f.write("predicted: %s expected: %s\n" % (str(pred), str(test[0])))
    error = mean_squared_error(y_test, prediction_denormalized)
    print(error)
    with open("rmse.txt", "a") as file:
        file.write(str(error) + "\n")
    with open("rmse.txt", "r") as file:
        lines = file.readlines()
    plt.plot(range(len(lines)), [float(line) for line in lines])
    plt.tight_layout()
    plt.ylabel('RMSE')
    plt.xlabel('evaluation no')
    plt.savefig('evaluation.png')
    return error
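The evaluation step above maps model outputs back to a like count by inverting the min-max scaling applied before training. A minimal sketch of that round trip on plain floats (the bounds 1 and 3345 are the dev-set "likes" min/max printed in the logs above):

```python
def normalize(value, lo, hi):
    # min-max scale a value into [0, 1]
    return (value - lo) / (hi - lo)


def denormalize(scaled, lo, hi):
    # inverse of normalize: map a value in [0, 1] back to [lo, hi]
    return scaled * (hi - lo) + lo


# round trip: scaling then unscaling recovers the original value
scaled = normalize(617.0, 1.0, 3345.0)
restored = denormalize(scaled, 1.0, 3345.0)
print(restored)  # 617.0
```

Because the dev-set bounds differ from the training-set bounds, the inputs here are effectively rescaled per split; the inverse transform is exact only with respect to whichever bounds were used.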


@@ -0,0 +1,108 @@
from datetime import datetime
import pandas as pd
import numpy as np
from sacred.observers import FileStorageObserver, MongoObserver
from sacred import Experiment
from sklearn.metrics import mean_squared_error
from tensorflow import keras

ex = Experiment("sacred_scopes", interactive=True)
# ex.observers.append(MongoObserver(url='mongodb://mongo_user:mongo_password_IUM_2021@172.17.0.1:27017',
#                                   db_name='sacred'))
ex.observers.append(FileStorageObserver('my_runs'))


@ex.config
def my_config():
    epochs_amount = 30


def normalize_data(data):
    return (data - np.min(data)) / (np.max(data) - np.min(data))


@ex.capture
def prepare_model(epochs_amount, _run):
    _run.info["prepare_message_ts"] = str(datetime.now())
    data = pd.read_csv("data_train", sep=',', skip_blank_lines=True, nrows=1087, error_bad_lines=False,
                       names=["video_id", "last_trending_date", "publish_date", "publish_hour",
                              "category_id",
                              "channel_title", "views", "likes", "dislikes", "comment_count"]).dropna()
    X = data.loc[:, data.columns == "views"].astype(int)
    y = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X)
    max_val_sub = np.max(X)
    X = (X - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y)
    max_val_like = np.max(y)
    y = (y - min_val_like) / (max_val_like - min_val_like)
    print(min_val_like)
    print(max_val_like)
    model = keras.Sequential([
        keras.layers.Dense(512, input_dim=X.shape[1], activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(1, activation='linear'),
    ])
    model.compile(loss='mean_absolute_error', optimizer="Adam", metrics=['mean_absolute_error'])
    model.fit(X, y, epochs=int(epochs_amount), validation_split=0.3)
    data = pd.read_csv("data_dev", sep=',', error_bad_lines=False,
                       skip_blank_lines=True, nrows=527, names=["video_id", "last_trending_date",
                                                                "publish_date", "publish_hour", "category_id",
                                                                "channel_title", "views", "likes", "dislikes",
                                                                "comment_count"]).dropna()
    X_test = data.loc[:, data.columns == "views"].astype(int)
    y_test = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X_test)
    max_val_sub = np.max(X_test)
    X_test = (X_test - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y_test)
    max_val_like = np.max(y_test)
    print(min_val_like)
    print(max_val_like)
    prediction = model.predict(X_test)
    prediction_denormalized = []
    for pred in prediction:
        denorm = pred[0] * (max_val_like[0] - min_val_like[0]) + min_val_like[0]
        prediction_denormalized.append(denorm)
    with open("predictions.txt", "w") as f:
        for (pred, test) in zip(prediction_denormalized, y_test.values):
            f.write("predicted: %s expected: %s\n" % (str(pred), str(test[0])))
    error = mean_squared_error(y_test, prediction_denormalized)
    print(error)
    model.save('model')
    _run.log_scalar("training.metrics", error)
    return error


@ex.main
def my_main(epochs_amount):
    print(prepare_model())


ex.run()
ex.add_artifact("model/saved_model.pb")


@@ -0,0 +1,79 @@
from datetime import datetime
import pandas as pd
import numpy as np
from sacred.observers import FileStorageObserver, MongoObserver
from sacred import Experiment
from tensorflow import keras
import sys
from evaluate_network import evaluate_model

ex = Experiment("sacred_scopes", interactive=True)
# ex.observers.append(MongoObserver(url='mongodb://mongo_user:mongo_password_IUM_2021@172.17.0.1:27017',
#                                   db_name='sacred'))
ex.observers.append(FileStorageObserver('my_runs'))


@ex.config
def my_config():
    epochs_amount = 30


def normalize_data(data):
    return (data - np.min(data)) / (np.max(data) - np.min(data))


@ex.capture
def prepare_model(epochs_amount, _run):
    _run.info["prepare_message_ts"] = str(datetime.now())
    data = pd.read_csv("data_train", sep=',', skip_blank_lines=True, nrows=1087, error_bad_lines=False,
                       names=["video_id", "last_trending_date", "publish_date", "publish_hour",
                              "category_id",
                              "channel_title", "views", "likes", "dislikes", "comment_count"]).dropna()
    X = data.loc[:, data.columns == "views"].astype(int)
    y = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X)
    max_val_sub = np.max(X)
    X = (X - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y)
    max_val_like = np.max(y)
    y = (y - min_val_like) / (max_val_like - min_val_like)
    print(min_val_like)
    print(max_val_like)
    model = keras.Sequential([
        keras.layers.Dense(512, input_dim=X.shape[1], activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(1, activation='linear'),
    ])
    model.compile(loss='mean_absolute_error', optimizer="Adam", metrics=['mean_absolute_error'])
    model.fit(X, y, epochs=int(epochs_amount), validation_split=0.3)
    model.save('model')
    metrics = evaluate_model()
    print(metrics)
    return metrics


@ex.main
def my_main(epochs_amount):
    print(prepare_model())


ex.run()
ex.add_artifact("model/saved_model.pb")


@@ -0,0 +1,78 @@
from datetime import datetime
import pandas as pd
import numpy as np
from sacred.observers import FileStorageObserver, MongoObserver
from sacred import Experiment
from tensorflow import keras
import sys
from evaluate_network import evaluate_model

ex = Experiment("sacred_scopes", interactive=True)
# ex.observers.append(MongoObserver(url='mongodb://mongo_user:mongo_password_IUM_2021@172.17.0.1:27017',
#                                   db_name='sacred'))
ex.observers.append(FileStorageObserver('my_runs'))


@ex.config
def my_config():
    epochs_amount = 30


def normalize_data(data):
    return (data - np.min(data)) / (np.max(data) - np.min(data))


@ex.capture
def prepare_model(epochs_amount, _run):
    _run.info["prepare_message_ts"] = str(datetime.now())
    data = pd.read_csv("data_train", sep=',', skip_blank_lines=True, nrows=1087, error_bad_lines=False,
                       names=["video_id", "last_trending_date", "publish_date", "publish_hour",
                              "category_id",
                              "channel_title", "views", "likes", "dislikes", "comment_count"]).dropna()
    X = data.loc[:, data.columns == "views"].astype(int)
    y = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X)
    max_val_sub = np.max(X)
    X = (X - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y)
    max_val_like = np.max(y)
    y = (y - min_val_like) / (max_val_like - min_val_like)
    print(min_val_like)
    print(max_val_like)
    model = keras.Sequential([
        keras.layers.Dense(512, input_dim=X.shape[1], activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(1, activation='linear'),
    ])
    model.compile(loss='mean_absolute_error', optimizer="Adam", metrics=['mean_absolute_error'])
    model.fit(X, y, epochs=int(epochs_amount), validation_split=0.3)
    model.save('model')
    metrics = evaluate_model()
    _run.log_scalar("training.metrics", metrics)
    return metrics


@ex.main
def my_main(epochs_amount):
    print(prepare_model())


ex.run()
ex.add_artifact("model/saved_model.pb")

neural_network.py Normal file

@@ -0,0 +1,109 @@
import sys
from datetime import datetime
import pandas as pd
import numpy as np
from sacred.observers import FileStorageObserver, MongoObserver
from sacred import Experiment
from sklearn.metrics import mean_squared_error
from tensorflow import keras

ex = Experiment("s434765", interactive=True, save_git_info=False)
ex.observers.append(MongoObserver(url='mongodb://mongo_user:mongo_password_IUM_2021@172.17.0.1:27017',
                                  db_name='sacred'))
ex.observers.append(FileStorageObserver('my_runs'))


@ex.config
def my_config():
    epochs_amount = int(sys.argv[1])


def normalize_data(data):
    return (data - np.min(data)) / (np.max(data) - np.min(data))


@ex.capture
def prepare_model(epochs_amount, _run):
    _run.info["prepare_message_ts"] = str(datetime.now())
    data = pd.read_csv("data_train", sep=',', skip_blank_lines=True, nrows=1087, error_bad_lines=False,
                       names=["video_id", "last_trending_date", "publish_date", "publish_hour",
                              "category_id",
                              "channel_title", "views", "likes", "dislikes", "comment_count"]).dropna()
    X = data.loc[:, data.columns == "views"].astype(int)
    y = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X)
    max_val_sub = np.max(X)
    X = (X - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y)
    max_val_like = np.max(y)
    y = (y - min_val_like) / (max_val_like - min_val_like)
    print(min_val_like)
    print(max_val_like)
    model = keras.Sequential([
        keras.layers.Dense(512, input_dim=X.shape[1], activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(256, activation='relu'),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(1, activation='linear'),
    ])
    model.compile(loss='mean_absolute_error', optimizer="Adam", metrics=['mean_absolute_error'])
    model.fit(X, y, epochs=int(epochs_amount), validation_split=0.3)
    data = pd.read_csv("data_dev", sep=',', error_bad_lines=False,
                       skip_blank_lines=True, nrows=527, names=["video_id", "last_trending_date",
                                                                "publish_date", "publish_hour", "category_id",
                                                                "channel_title", "views", "likes", "dislikes",
                                                                "comment_count"]).dropna()
    X_test = data.loc[:, data.columns == "views"].astype(int)
    y_test = data.loc[:, data.columns == "likes"].astype(int)
    min_val_sub = np.min(X_test)
    max_val_sub = np.max(X_test)
    X_test = (X_test - min_val_sub) / (max_val_sub - min_val_sub)
    print(min_val_sub)
    print(max_val_sub)
    min_val_like = np.min(y_test)
    max_val_like = np.max(y_test)
    print(min_val_like)
    print(max_val_like)
    prediction = model.predict(X_test)
    prediction_denormalized = []
    for pred in prediction:
        denorm = pred[0] * (max_val_like[0] - min_val_like[0]) + min_val_like[0]
        prediction_denormalized.append(denorm)
    with open("predictions.txt", "w") as f:
        for (pred, test) in zip(prediction_denormalized, y_test.values):
            f.write("predicted: %s expected: %s\n" % (str(pred), str(test[0])))
    error = mean_squared_error(y_test, prediction_denormalized)
    print(error)
    model.save('model')
    _run.log_scalar("training.metrics", error)
    return error


@ex.main
def my_main(epochs_amount):
    print(prepare_model())


ex.run()
ex.add_artifact("model/saved_model.pb")
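The my_runs/&lt;run_id&gt;/ directories shown earlier (config.json, run.json, info.json, metrics.json, cout.txt) are written by Sacred's FileStorageObserver, and each `_run.log_scalar("training.metrics", error)` call appends one entry to the structure in metrics.json. Sacred is not needed to reproduce that layout; a small stand-in for `log_scalar` (a sketch of the on-disk format, not Sacred's actual implementation):

```python
import json
from datetime import datetime


def log_scalar(metrics, name, value, step=0):
    # append one scalar observation in the metrics.json layout
    # used by Sacred's FileStorageObserver
    entry = metrics.setdefault(name, {"steps": [], "timestamps": [], "values": []})
    entry["steps"].append(step)
    entry["timestamps"].append(datetime.now().isoformat())
    entry["values"].append(value)


metrics = {}
log_scalar(metrics, "training.metrics", 114831.63920784603)
print(json.dumps(metrics, indent=4))
```

The printed document matches the shape of my_runs/3/metrics.json above, with the MSE from that run as the single logged value.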

neural_network.sh Executable file

@@ -0,0 +1,2 @@
#!/bin/bash
python3 neural_network.py $1

predictions.txt Normal file

@@ -0,0 +1,422 @@
predicted: 400.330885887146 expected: 617
predicted: 27.162359654903412 expected: 172
predicted: 1451.7506175041199 expected: 611
predicted: 89.5334190428257 expected: 269
predicted: 1451.7506175041199 expected: 1095
predicted: 89.5334190428257 expected: 68
predicted: 26.76018589735031 expected: 5
predicted: 400.330885887146 expected: 986
predicted: 179.86350238323212 expected: 262
predicted: 357.96860003471375 expected: 817
predicted: 208.96947646141052 expected: 197
predicted: 151.92037403583527 expected: 264
predicted: 311.94646322727203 expected: 830
predicted: 1451.7506175041199 expected: 1415
predicted: 308.85175335407257 expected: 134
predicted: 26.881321370601654 expected: 58
predicted: 29.60762630403042 expected: 93
predicted: 473.10275983810425 expected: 830
predicted: 1451.7506175041199 expected: 1207
predicted: 318.1358331441879 expected: 269
predicted: 454.97389698028564 expected: 558
predicted: 1308.27658700943 expected: 1558
predicted: 27.31346444785595 expected: 37
predicted: 476.11863946914673 expected: 364
predicted: 494.2209930419922 expected: 1020
predicted: 26.76018589735031 expected: 11
predicted: 351.91593730449677 expected: 225
predicted: 476.11863946914673 expected: 228
predicted: 1308.27658700943 expected: 1184
predicted: 336.6213505268097 expected: 370
predicted: 27.781967476010323 expected: 68
predicted: 144.74467933177948 expected: 201
predicted: 1451.7506175041199 expected: 1113
predicted: 336.6213505268097 expected: 496
predicted: 27.781964361667633 expected: 43
predicted: 30.40837675333023 expected: 59
predicted: 27.781964361667633 expected: 60
predicted: 31.24549649655819 expected: 78
predicted: 231.2072286605835 expected: 263
predicted: 318.1358331441879 expected: 400
predicted: 1451.7506175041199 expected: 1256
predicted: 27.781964361667633 expected: 23
predicted: 1451.7506175041199 expected: 3345
predicted: 35.02500361204147 expected: 98
predicted: 530.2856295108795 expected: 238
predicted: 39.78039000928402 expected: 69
predicted: 351.91593730449677 expected: 170
predicted: 26.79070645570755 expected: 31
predicted: 43.6776317358017 expected: 102
predicted: 1451.7506175041199 expected: 1070
predicted: 115.7426495552063 expected: 96
predicted: 433.7032353878021 expected: 387
predicted: 27.035397246479988 expected: 25
predicted: 418.5240786075592 expected: 574
predicted: 357.96860003471375 expected: 165
predicted: 397.3030471801758 expected: 765
predicted: 473.10275983810425 expected: 599
predicted: 454.97389698028564 expected: 906
predicted: 33.45454025268555 expected: 71
predicted: 409.42135322093964 expected: 433
predicted: 409.42135322093964 expected: 152
predicted: 60.30296468734741 expected: 116
predicted: 26.76018589735031 expected: 19
predicted: 26.76018589735031 expected: 24
predicted: 43.6776317358017 expected: 97
predicted: 60.30296468734741 expected: 49
predicted: 530.2856295108795 expected: 291
predicted: 1451.7506175041199 expected: 2816
predicted: 351.91593730449677 expected: 152
predicted: 473.10275983810425 expected: 1033
predicted: 454.97389698028564 expected: 740
predicted: 29.60762630403042 expected: 32
predicted: 46.188458412885666 expected: 74
predicted: 530.2856295108795 expected: 453
predicted: 351.91593730449677 expected: 219
predicted: 100.77534905076027 expected: 82
predicted: 123.19417536258698 expected: 72
predicted: 27.514323979616165 expected: 109
predicted: 400.330885887146 expected: 567
predicted: 271.7156335115433 expected: 389
predicted: 29.60762630403042 expected: 70
predicted: 1451.7506175041199 expected: 987
predicted: 1451.7506175041199 expected: 1812
predicted: 476.11863946914673 expected: 169
predicted: 234.37723088264465 expected: 270
predicted: 26.770161136984825 expected: 33
predicted: 81.62594100832939 expected: 75
predicted: 1451.7506175041199 expected: 1424
predicted: 26.76018589735031 expected: 39
predicted: 26.770161136984825 expected: 49
predicted: 26.79070645570755 expected: 141
predicted: 26.76018589735031 expected: 24
predicted: 100.77534905076027 expected: 91
predicted: 189.9326456785202 expected: 101
predicted: 494.2209930419922 expected: 401
predicted: 494.2209930419922 expected: 570
predicted: 104.52815690636635 expected: 106
predicted: 26.76077450811863 expected: 43
predicted: 530.2856295108795 expected: 439
predicted: 1308.27658700943 expected: 1220
predicted: 27.781964361667633 expected: 82
predicted: 26.76018589735031 expected: 5
predicted: 476.11863946914673 expected: 314
predicted: 336.6213505268097 expected: 180
predicted: 28.17588511109352 expected: 88
predicted: 1451.7506175041199 expected: 1642
predicted: 400.330885887146 expected: 417
predicted: 256.2422585487366 expected: 346
predicted: 26.76018589735031 expected: 10
predicted: 115.7426495552063 expected: 93
predicted: 26.79070645570755 expected: 26
predicted: 52.00283966958523 expected: 41
predicted: 1451.7506175041199 expected: 505
predicted: 26.76018589735031 expected: 11
predicted: 1451.7506175041199 expected: 929
predicted: 357.96860003471375 expected: 877
predicted: 39.78039000928402 expected: 370
predicted: 26.76018589735031 expected: 28
predicted: 1451.7506175041199 expected: 1085
predicted: 1308.27658700943 expected: 654
predicted: 28.17588511109352 expected: 59
predicted: 144.74467933177948 expected: 259
predicted: 321.2292972803116 expected: 165
predicted: 26.76018589735031 expected: 1
predicted: 26.76018589735031 expected: 26
predicted: 100.77534905076027 expected: 399
predicted: 397.3030720949173 expected: 155
predicted: 137.58719730377197 expected: 158
predicted: 433.7032353878021 expected: 782
predicted: 141.16236305236816 expected: 376
predicted: 26.76018589735031 expected: 1
predicted: 193.0980635881424 expected: 116
predicted: 256.2422585487366 expected: 628
predicted: 1451.7506175041199 expected: 1897
predicted: 73.2502589225769 expected: 76
predicted: 569.2618026733398 expected: 450
predicted: 234.37723088264465 expected: 272
predicted: 351.91593730449677 expected: 149
predicted: 1308.27658700943 expected: 1069
predicted: 176.44806504249573 expected: 286
predicted: 137.58719730377197 expected: 526
predicted: 27.31346444785595 expected: 29
predicted: 256.2422585487366 expected: 373
predicted: 100.77534905076027 expected: 481
predicted: 89.5334190428257 expected: 74
predicted: 27.035397246479988 expected: 54
predicted: 108.27169024944305 expected: 102
predicted: 26.76018589735031 expected: 22
predicted: 1451.7506175041199 expected: 1360
predicted: 433.7032353878021 expected: 324
predicted: 397.3030471801758 expected: 973
predicted: 400.330885887146 expected: 407
predicted: 186.616468667984 expected: 317
predicted: 26.76018589735031 expected: 76
predicted: 418.5240786075592 expected: 688
predicted: 26.944442868232727 expected: 44
predicted: 318.1358331441879 expected: 239
predicted: 433.7032353878021 expected: 344
predicted: 418.5240786075592 expected: 688
predicted: 351.91593730449677 expected: 442
predicted: 68.93387961387634 expected: 157
predicted: 1451.7506175041199 expected: 1193
predicted: 32.178858771920204 expected: 102
predicted: 397.3030471801758 expected: 768
predicted: 28.805081993341446 expected: 42
predicted: 271.7156335115433 expected: 265
predicted: 27.31346444785595 expected: 15
predicted: 27.781964361667633 expected: 44
predicted: 26.76018589735031 expected: 1
predicted: 351.91593730449677 expected: 216
predicted: 26.79070645570755 expected: 20
predicted: 530.2856295108795 expected: 308
predicted: 26.881321370601654 expected: 29
predicted: 35.02500361204147 expected: 208
predicted: 126.88576769828796 expected: 99
predicted: 32.178858771920204 expected: 48
predicted: 26.76018589735031 expected: 15
predicted: 400.330885887146 expected: 630
predicted: 271.7156335115433 expected: 333
predicted: 26.826289378106594 expected: 55
predicted: 28.17588511109352 expected: 52
predicted: 30.40837675333023 expected: 27
predicted: 26.944442868232727 expected: 16
predicted: 530.2856295108795 expected: 472
predicted: 351.91593730449677 expected: 162
predicted: 1451.7506175041199 expected: 1054
predicted: 108.27169024944305 expected: 223
predicted: 26.76018589735031 expected: 22
predicted: 1451.7506175041199 expected: 3345
predicted: 73.2502589225769 expected: 71
predicted: 476.11863946914673 expected: 862
predicted: 27.514320865273476 expected: 26
predicted: 1308.27658700943 expected: 624
predicted: 28.805081993341446 expected: 24
predicted: 26.79070645570755 expected: 115
predicted: 1451.7506175041199 expected: 3131
predicted: 30.40837675333023 expected: 27
predicted: 1451.7506175041199 expected: 1116
predicted: 234.37723088264465 expected: 501
predicted: 1308.27658700943 expected: 1380
predicted: 433.7032353878021 expected: 538
predicted: 26.944442868232727 expected: 77
predicted: 253.14989066123962 expected: 270
predicted: 1308.27658700943 expected: 618
predicted: 530.2856295108795 expected: 335
predicted: 400.330885887146 expected: 550
predicted: 89.5334190428257 expected: 169
predicted: 400.330885887146 expected: 653
predicted: 27.035397246479988 expected: 21
predicted: 189.9326456785202 expected: 225
predicted: 1451.7506175041199 expected: 2192
predicted: 357.96860003471375 expected: 213
predicted: 409.42135322093964 expected: 695
predicted: 26.761529736220837 expected: 23
predicted: 68.93387961387634 expected: 148
predicted: 123.19417536258698 expected: 57
predicted: 27.514323979616165 expected: 42
predicted: 123.19417536258698 expected: 195
predicted: 141.16236305236816 expected: 172
predicted: 494.2209930419922 expected: 220
predicted: 166.22088754177094 expected: 112
predicted: 26.826289378106594 expected: 14
predicted: 351.91593730449677 expected: 314
predicted: 30.40837363898754 expected: 47
predicted: 454.97389698028564 expected: 836
predicted: 409.42135322093964 expected: 375
predicted: 400.330885887146 expected: 501
predicted: 360.99444556236267 expected: 392
predicted: 409.42135322093964 expected: 824
predicted: 56.03865718841553 expected: 220
predicted: 26.76018589735031 expected: 3
predicted: 224.82746028900146 expected: 307
predicted: 26.76018589735031 expected: 18
predicted: 29.60762630403042 expected: 93
predicted: 318.1358331441879 expected: 180
predicted: 26.76018589735031 expected: 3
predicted: 530.2856295108795 expected: 297
predicted: 494.2209930419922 expected: 576
predicted: 530.2856295108795 expected: 314
predicted: 193.0980635881424 expected: 139
predicted: 29.60762630403042 expected: 105
predicted: 318.1358331441879 expected: 231
predicted: 26.826289378106594 expected: 12
predicted: 1308.27658700943 expected: 1026
predicted: 318.1358331441879 expected: 304
predicted: 26.76018589735031 expected: 3
predicted: 357.96860003471375 expected: 335
predicted: 56.03865718841553 expected: 110
predicted: 26.79070645570755 expected: 43
predicted: 43.6776317358017 expected: 113
predicted: 400.330885887146 expected: 487
predicted: 357.96860003471375 expected: 541
predicted: 212.1372114419937 expected: 114
predicted: 26.826289378106594 expected: 101
predicted: 179.86350238323212 expected: 251
predicted: 1451.7506175041199 expected: 1358
predicted: 360.99444556236267 expected: 1031
predicted: 1451.7506175041199 expected: 1788
predicted: 186.616468667984 expected: 137
predicted: 26.76018589735031 expected: 29
predicted: 30.40837675333023 expected: 68
predicted: 476.11863946914673 expected: 442
predicted: 26.944442868232727 expected: 24
predicted: 454.97389698028564 expected: 1129
predicted: 26.76018589735031 expected: 35
predicted: 33.45454025268555 expected: 181
predicted: 530.2856295108795 expected: 894
predicted: 26.76018589735031 expected: 49
predicted: 68.93387961387634 expected: 170
predicted: 123.19417536258698 expected: 196
predicted: 1451.7506175041199 expected: 3345
predicted: 30.40837675333023 expected: 24
predicted: 433.7032353878021 expected: 629
predicted: 530.2856295108795 expected: 290
predicted: 433.7032353878021 expected: 342
predicted: 108.27169024944305 expected: 177
predicted: 26.944442868232727 expected: 57
predicted: 1308.27658700943 expected: 707
predicted: 228.01589941978455 expected: 289
predicted: 27.514323979616165 expected: 78
predicted: 357.96860003471375 expected: 530
predicted: 179.86350238323212 expected: 276
predicted: 400.330885887146 expected: 389
predicted: 27.781964361667633 expected: 173
predicted: 530.2856295108795 expected: 717
predicted: 476.11863946914673 expected: 707
predicted: 530.2856295108795 expected: 440
predicted: 26.761529736220837 expected: 36
predicted: 35.02500361204147 expected: 115
predicted: 100.77534905076027 expected: 437
predicted: 30.40837675333023 expected: 75
predicted: 1451.7506175041199 expected: 611
predicted: 27.035397246479988 expected: 17
predicted: 26.76018589735031 expected: 52
predicted: 476.11863946914673 expected: 849
predicted: 397.3030471801758 expected: 230
predicted: 357.96860003471375 expected: 537
predicted: 1451.7506175041199 expected: 1645
predicted: 357.96860003471375 expected: 221
predicted: 104.52815690636635 expected: 167
predicted: 397.3030471801758 expected: 274
predicted: 137.58719730377197 expected: 141
predicted: 530.2856295108795 expected: 414
predicted: 26.76018589735031 expected: 32
predicted: 357.96860003471375 expected: 203
predicted: 26.76018589735031 expected: 18
predicted: 179.86350238323212 expected: 212
predicted: 27.514323979616165 expected: 29
predicted: 1451.7506175041199 expected: 1665
predicted: 351.91593730449677 expected: 192
predicted: 26.944442868232727 expected: 24
predicted: 186.616468667984 expected: 175
predicted: 1451.7506175041199 expected: 1329
predicted: 494.2209930419922 expected: 261
predicted: 357.96860003471375 expected: 712
predicted: 60.30296468734741 expected: 52
predicted: 351.91593730449677 expected: 157
predicted: 218.47828722000122 expected: 285
predicted: 311.94648814201355 expected: 405
predicted: 318.1358082294464 expected: 452
predicted: 1451.7506175041199 expected: 1267
predicted: 26.76018589735031 expected: 50
predicted: 81.62594100832939 expected: 150
predicted: 176.44806504249573 expected: 255
predicted: 26.770161136984825 expected: 18
predicted: 26.76018589735031 expected: 4
predicted: 430.66587924957275 expected: 437
predicted: 26.76018589735031 expected: 24
predicted: 26.944442868232727 expected: 71
predicted: 530.2856295108795 expected: 532
predicted: 476.11863946914673 expected: 729
predicted: 26.826289378106594 expected: 35
predicted: 454.97389698028564 expected: 368
predicted: 26.76018589735031 expected: 12
predicted: 1451.7506175041199 expected: 2034
predicted: 433.7032353878021 expected: 391
predicted: 357.96860003471375 expected: 560
predicted: 530.2856295108795 expected: 1011
predicted: 454.97389698028564 expected: 600
predicted: 186.616468667984 expected: 167
predicted: 26.76018589735031 expected: 34
predicted: 27.035397246479988 expected: 47
predicted: 1451.7506175041199 expected: 1148
predicted: 271.7156335115433 expected: 326
predicted: 1451.7506175041199 expected: 876
predicted: 26.76018589735031 expected: 10
predicted: 1451.7506175041199 expected: 3345
predicted: 409.42135322093964 expected: 993
predicted: 39.78039000928402 expected: 49
predicted: 112.0236759185791 expected: 230
predicted: 433.7032353878021 expected: 679
predicted: 1451.7506175041199 expected: 2201
predicted: 141.16236305236816 expected: 202
predicted: 569.2618026733398 expected: 663
predicted: 56.03865718841553 expected: 79
predicted: 308.85175335407257 expected: 214
predicted: 409.42135322093964 expected: 829
predicted: 30.40837675333023 expected: 149
predicted: 357.96860003471375 expected: 729
predicted: 27.781964361667633 expected: 19
predicted: 231.2072286605835 expected: 173
predicted: 397.3030471801758 expected: 240
predicted: 81.62594100832939 expected: 89
predicted: 26.826289378106594 expected: 49
predicted: 400.330885887146 expected: 228
predicted: 1451.7506175041199 expected: 651
predicted: 26.76077450811863 expected: 15
predicted: 43.6776317358017 expected: 61
predicted: 27.31346444785595 expected: 84
predicted: 26.826289378106594 expected: 36
predicted: 68.93387961387634 expected: 101
predicted: 293.37837839126587 expected: 184
predicted: 311.94646322727203 expected: 268
predicted: 1451.7506175041199 expected: 2910
predicted: 27.31346444785595 expected: 106
predicted: 271.7156335115433 expected: 433
predicted: 1451.7506175041199 expected: 1700
predicted: 27.162359654903412 expected: 41
predicted: 26.76018589735031 expected: 1
predicted: 400.330885887146 expected: 520
predicted: 26.76018589735031 expected: 50
predicted: 433.7032353878021 expected: 734
predicted: 26.761529736220837 expected: 45
predicted: 1451.7506175041199 expected: 2837
predicted: 27.31346444785595 expected: 23
predicted: 89.5334190428257 expected: 145
predicted: 530.2856295108795 expected: 185
predicted: 26.76018589735031 expected: 42
predicted: 208.96947646141052 expected: 410
predicted: 1451.7506175041199 expected: 1622
predicted: 409.42135322093964 expected: 661
predicted: 26.76018589735031 expected: 4
predicted: 293.37837839126587 expected: 369
predicted: 253.14989066123962 expected: 221
predicted: 293.37837839126587 expected: 234
predicted: 104.52815690636635 expected: 380
predicted: 357.96860003471375 expected: 249
predicted: 26.76018589735031 expected: 25
predicted: 1451.7506175041199 expected: 1876
predicted: 253.14989066123962 expected: 241
predicted: 199.41325294971466 expected: 334
predicted: 250.05866885185242 expected: 303
predicted: 26.76018589735031 expected: 19
predicted: 1451.7506175041199 expected: 1248
predicted: 100.77534905076027 expected: 501
predicted: 433.7032353878021 expected: 328
predicted: 256.2422585487366 expected: 406
predicted: 137.58719730377197 expected: 141
predicted: 100.77534905076027 expected: 408
predicted: 26.76018589735031 expected: 4
predicted: 1451.7506175041199 expected: 3147
predicted: 29.60762630403042 expected: 99
predicted: 179.86350238323212 expected: 89
predicted: 28.805081993341446 expected: 61
predicted: 26.944442868232727 expected: 27
predicted: 1451.7506175041199 expected: 1088
predicted: 29.60762318968773 expected: 105
predicted: 85.75156059861183 expected: 173
predicted: 1308.27658700943 expected: 1496
predicted: 530.2856295108795 expected: 866
predicted: 494.2210428714752 expected: 399
predicted: 250.05866885185242 expected: 317

rmse.txt (new file, 25 lines)

@@ -0,0 +1,25 @@
109845.55756236914
104845.55756236914
109845.55756236914
109845.55756236914
104845.55756236914
19845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
109845.55756236914
114882.99377127373
129787.96004765884
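
The `rmse.txt` values above are root-mean-square errors computed over predicted/expected pairs like those in the evaluation log. A minimal sketch of that arithmetic (not the repo's actual evaluation script; the parser assumes the exact `predicted: X expected: Y` log format shown above):

```python
import math

def rmse(pairs):
    """Root-mean-square error over (predicted, expected) pairs."""
    return math.sqrt(sum((p - e) ** 2 for p, e in pairs) / len(pairs))

def parse_line(line):
    # Hypothetical parser for log lines like
    # "predicted: 27.514323979616165 expected: 109"
    parts = line.split()
    return float(parts[1]), float(parts[3])

# First two lines of the evaluation log above, as an example input:
log = [
    "predicted: 27.514323979616165 expected: 109",
    "predicted: 400.330885887146 expected: 567",
]
print(rmse([parse_line(l) for l in log]))
```

One RMSE value per evaluation run appended to `rmse.txt` would produce a file like the one in this diff; a large shared error term (as in the repeated values above) usually indicates the model collapsing to near-constant predictions, which the capped `1451.75…` outputs in the log are consistent with.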