# Challenging America word-gap prediction
The task is to predict the word in the gap between two sentences.
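
For orientation, below is a minimal sketch of a unigram baseline that writes predictions in the expected format. It assumes the usual gonito layout (`train/expected.tsv` holding the gold gap word for each training line, `dev-0/in.tsv` holding one gap context per line) and the `word:prob` output format with a trailing `:prob` entry for the remaining probability mass; these paths and the format are assumptions for illustration, not taken from this repository's `run.py`.

```python
from collections import Counter

# Hypothetical paths following the usual gonito challenge layout; adjust to the
# actual files in this repository.
TRAIN_EXPECTED = "train/expected.tsv"   # assumed: one gold gap word per line
DEV_IN = "dev-0/in.tsv"                 # assumed: one gap context per line
DEV_OUT = "dev-0/out.tsv"

# Count how often each gold word fills a gap in the training data.
counts = Counter()
with open(TRAIN_EXPECTED, encoding="utf-8") as f:
    for line in f:
        word = line.strip()
        if word:
            counts[word] += 1

# Keep the 20 most frequent words; reserve 10% of the mass for everything else.
top = counts.most_common(20)
total = sum(c for _, c in top)
dist = " ".join(f"{w}:{0.9 * c / total:.6f}" for w, c in top) + " :0.1"

# Emit the same distribution for every gap in dev-0, one output line per input line.
with open(DEV_IN, encoding="utf-8") as fin, open(DEV_OUT, "w", encoding="utf-8") as fout:
    for _ in fin:
        fout.write(dist + "\n")
```

A context-unaware baseline like this gives a high perplexity; the point is only to show the input/output plumbing a real model would plug into.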
## Evaluation
PerplexityHashed is the metric used to check the performance of the model. The lower the perplexity, the better the model. To run the evaluation, run the following command:
```
./geval --metric PerplexityHashed --test-name dev-0
```
The perplexity calculated on dev-0 is 981.69.
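
As a reading aid (not the exact definition of PerplexityHashed, which also hashes the vocabulary), perplexity is the exponential of the average negative log-probability the model assigns to the true gap words:

$$\mathrm{PP} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\ln p(w_i)\right)$$

So lower values mean the model places more probability mass on the correct words.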