challenging-america-word-ga.../.ipynb_checkpoints/run-checkpoint.ipynb

KENLM_BUILD_PATH='/home/students/s434708/kenlm/build'
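A quick sanity check (assuming KenLM has already been compiled under that path) that the binaries used below actually exist:

!ls $KENLM_BUILD_PATH/bin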

Data preprocessing

import pandas as pd
import csv
import regex as re
def clean_text(text):
    # Lowercase, rejoin words hyphenated across line breaks ("-\n") and
    # turn the remaining literal "\n" markers into spaces.
    text = text.lower().replace('-\\n', '').replace('\\n', ' ')
    # Strip all Unicode punctuation.
    text = re.sub(r'\p{P}', '', text)
    return text
# Column 6 of in.tsv.xz holds the left context, column 7 the right context;
# expected.tsv holds the gap word itself.
train_data = pd.read_csv('train/in.tsv.xz', sep='\t', error_bad_lines=False, warn_bad_lines=False, header=None, quoting=csv.QUOTE_NONE)
train_labels = pd.read_csv('train/expected.tsv', sep='\t', error_bad_lines=False, warn_bad_lines=False, header=None, quoting=csv.QUOTE_NONE)

train_data = train_data[[6, 7]]
train_data = pd.concat([train_data, train_labels], axis=1)

# Rebuild the full passage: left context + gap word + right context.
train_data['text'] = train_data[6] + ' ' + train_data[0] + ' ' + train_data[7]
train_data = train_data[['text']]

# Write one cleaned passage per line, which is the format lmplz expects.
with open('processed_train.txt', 'w') as file:
    for _, row in train_data.iterrows():
        text = clean_text(str(row['text']))
        file.write(text + '\n')
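A quick check of what clean_text produces on a made-up fragment (the sample string below is illustrative, not taken from the corpus):

sample = 'The presi-\\ndent said,\\nthat the "Union" must be preserved.'
print(clean_text(sample))
# the president said that the union must be preserved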

KenLM model

!$KENLM_BUILD_PATH/bin/lmplz -o 5 --skip_symbols < processed_train.txt > model.arpa
=== 1/5 Counting and sorting n-grams ===
Reading /home/students/s434708/Desktop/Modelowanie Języka/challenging-america-word-gap-prediction-kenlm/processed_train.txt
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
********************************Warning: <s> appears in the input.  All instances of <s>, </s>, and <unk> will be interpreted as whitespace.
********************************************************************
Unigram tokens 135911223 types 4381594
=== 2/5 Calculating and sorting adjusted counts ===
Chain sizes: 1:52579128 2:1295655936 3:2429355008 4:3886967808 5:5668495360
Statistics:
1 4381594 D1=0.841838 D2=1.01787 D3+=1.21057
2 26800631 D1=0.836734 D2=1.01657 D3+=1.19437
3 69811700 D1=0.878562 D2=1.11227 D3+=1.27889
4 104063034 D1=0.931257 D2=1.23707 D3+=1.36664
5 119487533 D1=0.938146 D2=1.3058 D3+=1.41614
Memory estimate for binary LM:
type      MB
probing 6752 assuming -p 1.5
probing 7917 assuming -r models -p 1.5
trie    3572 without quantization
trie    2120 assuming -q 8 -b 8 quantization 
trie    3104 assuming -a 22 array pointer compression
trie    1652 assuming -a 22 -q 8 -b 8 array pointer compression and quantization
=== 3/5 Calculating and sorting initial probabilities ===
Chain sizes: 1:52579128 2:428810096 3:1396234000 4:2497512816 5:3345650924
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
####################################################################################################
=== 4/5 Calculating and writing order-interpolated probabilities ===
Chain sizes: 1:52579128 2:428810096 3:1396234000 4:2497512816 5:3345650924
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
####################################################################################################
=== 5/5 Writing ARPA model ===
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
----------------------------------------------------------------------------------------------------Last input should have been poison.  The program should end soon with an error.  If it doesn't, there's a bug.
terminate called after throwing an instance of 'util::FDException'
  what():  /home/students/s434708/kenlm/util/file.cc:228 in void util::WriteOrThrow(int, const void*, std::size_t) threw FDException because `ret < 1'.
No space left on device in /home/students/s434708/Desktop/Modelowanie Języka/challenging-america-word-gap-prediction-kenlm/model.arpa while writing 8189 bytes
/bin/bash: line 1: 26725 Aborted                 /home/students/s434708/kenlm/build/bin/lmplz -o 5 --skip_symbols < processed_train.txt > model.arpa
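The job died while writing the full 5-gram ARPA file because the disk filled up. One way to shrink the output before retrying (a sketch, not rerun here; the prune thresholds and memory cap are guesses) is to drop singleton higher-order n-grams with lmplz's --prune option, cap its memory with -S, and point temporary files at a roomier partition with -T:

!$KENLM_BUILD_PATH/bin/lmplz -o 5 --skip_symbols --prune 0 0 1 1 1 -S 40% -T /tmp < processed_train.txt > model.arpa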
!$KENLM_BUILD_PATH/bin/build_binary model.arpa model.binary
Reading model.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
/home/students/s434708/kenlm/util/file.cc:86 in int util::CreateOrThrow(const char*) threw ErrnoException because `-1 == (ret = open(name, 0100 | 01000 | 02, 0400 | 0200 | (0400 >> 3) | ((0400 >> 3) >> 3)))'.
No space left on device while creating model.binary Byte: 94
ERROR
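build_binary hit the same full disk. The memory estimates printed by lmplz above suggest the quantized trie layout (-a 22 -q 8 -b 8) needs roughly 1.6 GB instead of ~6.8 GB for the default probing structure, so a retry along these lines could fit once the ARPA file exists (a sketch, assuming enough space remains for model.arpa itself):

!$KENLM_BUILD_PATH/bin/build_binary -a 22 -q 8 -b 8 trie model.arpa model.binary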
!rm processed_train.txt
!rm model.arpa

Predictions

import kenlm
test_str = 'really good'

# Sanity check: total log10 probability of a short phrase, then the
# per-token scores (log10 probability, n-gram order used, OOV flag).
model = kenlm.Model('model.binary')
print(model.score(test_str, bos=True, eos=True))
for i in model.full_scores(test_str):
    print(i)
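For the word-gap task itself, one would score each candidate word inside its context and keep the best-scoring ones. A minimal sketch along those lines (the candidate list and helper below are assumptions for illustration, not the notebook's final solution):

import kenlm

model = kenlm.Model('model.binary')

# Hypothetical candidate vocabulary; in practice this would be the most
# frequent words from processed_train.txt.
candidates = ['the', 'of', 'and', 'to', 'a', 'in', 'that', 'is']

def predict_gap(left_context, right_context, top_k=5):
    # Score "left context + candidate + right context" with the LM and rank.
    scored = []
    for word in candidates:
        sentence = f'{left_context} {word} {right_context}'
        scored.append((word, model.score(sentence, bos=True, eos=True)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

print(predict_gap('united states of', 'and the constitution'))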