'
0.25
#### `p` — filtering by confidence
When you use `p<P>`, only the top P% of entries with the highest
confidence will be considered. For instance, `p<50>` means that only half of the
items (those with the largest confidence scores) will be considered.
So far, this is handled only for the MultiLabel-F-measure metrics. If more than one
label is given for an item, the geometric mean of all its probabilities is used.
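For example (just a sketch of the syntax, no actual output shown), restricting the multi-label F-measure to the 25% most confident entries could be requested as follows:
geval -o out.tsv -e expected.tsv --metric 'MultiLabel-F1:p<25>'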
### Presentation
Some flags do not modify the result itself, but rather change the way
it is presented by GEval (or by the associated
[Gonito](https://gonito.net) web application).
#### `N` — use an alternative name
Sometimes the metric name gets complicated; you can use the `N<...>`
flag to give the metric a more human-readable name.
This will be used:
* by GEval when presenting results from more than one metric (when
only one metric is calculated, its name is not given anyway),
* by Gonito, e.g. in table headers.
$ geval -o out.tsv -e expected.tsv --metric Accuracy --metric 'MultiLabel-F1:N<F-score>' --metric 'MultiLabel-F0:N<Precision>' --metric 'MultiLabel-F9999:N<Recall>'
Accuracy 0.200
F-score 0.511
Precision 0.462
Recall 0.571
(GEval does not have separate Precision/Recall metrics, but they can
be easily obtained by setting the β parameter of the F-measure to,
respectively, 0 and a large number.)
More than one name can be given. In such a case, the names will be concatenated with spaces.
$ geval --precision 3 -o out.tsv -e expected.tsv --metric 'Accuracy' --metric 'MultiLabel-F1:N<F-score>N<on>N<tokens>'
Accuracy 0.200
F-score on tokens 0.511
This is handy when combined with the `{...}` operator (see below).
#### `P` — set the priority (within the Gonito platform)
This sets the priority level taken into account when the results are displayed on the Gonito platform.
It has no effect in GEval itself (the flag is simply disregarded there).
$ geval --precision 3 -o out.tsv -e expected.tsv --metric 'Accuracy:P<1>' --metric 'MultiLabel-F1:P<3>'
Accuracy:P<1> 0.200
MultiLabel-F1.0:P<3> 0.511
The priority is interpreted by Gonito in the following way:
* 1 — show everywhere, including the main leaderboard table
* 2 — show on the secondary leaderboard table and in detailed information for a submission
* 3 — show only in detailed information for a submission
Although you can specify `P<...>` more than once, only the first value
will be considered for a given metric (this might be important when combined with the `{...}` operator).
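For instance (a sketch relying on the `{...}` expansion described below, with made-up names), `--metric 'MultiLabel-F1:{P<1>N<main>,N<aux>}P<3>'` expands to `MultiLabel-F1:P<1>N<main>P<3>` and `MultiLabel-F1:N<aux>P<3>`: the first metric gets priority 1 (the first `P<...>` wins), the second one priority 3.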
### Combining flags
Flags can be combined, just by concatenation (`:` should be given only once):
$ geval -o out.tsv -e expected.tsv -i in.tsv --metric Accuracy --metric 'Accuracy:fcs<\d>N<MyWeirdMetric>'
Accuracy 0.2
MyWeirdMetric 0.75
Note that the order of flags may sometimes be significant; in
general, they are applied from left to right.
### Cartesian operator `{...}`
Sometimes, you need to define a large number of similar metrics. Then
you can use the special `{...}` operator interpreted by GEval (not
Bash!). For instance `{foo,bar}xyz{aaa,bbb,ccc}` will be internally
considered as the Cartesian product (i.e. you'll get all the
combinations): `fooxyzaaa`, `fooxyzbbb`, `fooxyzccc`, `barxyzaaa`,
`barxyzbbb`, `barxyzccc`.
For example, let's assume that we want accuracy, F-score, precision
and recall in both case-sensitive and case-insensitive versions.
Here's the way to calculate all these 8 metrics in a concise manner:
$ geval --precision 3 -o out.tsv -e expected.tsv -i in.tsv --metric '{Accuracy:N<Acc>,MultiLabel-F1:N<F1>,MultiLabel-F0:N<P>,MultiLabel-F9999:N<R>}N<case>{N<sensitive>,cN<non-sensitive>}'
sensitive non-sensitive
Acc case 0.200 0.400
F1 case 0.511 0.681
P case 0.462 0.615
R case 0.571 0.762
Note that GEval automagically puts the results in a table! (Admittedly,
_case_ should probably appear in the headers, but GEval generates
the table completely on its own.)
## Handling headers
When dealing with TSV files, you often face the dilemma of whether to add a
header with field names as the first line of a TSV file or not:
* a header makes a TSV file more readable to humans, especially when you use tools like [Visidata](https://www.visidata.org/)
and when there are a lot of input columns (features),
* … but, on the other hand, it makes the file much more cumbersome to process with textutils (`cat`, `sort`, `shuf`, etc.) or similar tools.
GEval can handle TSV files both with _and_ without headers. By default,
headerless TSV files are assumed, but you can specify column names for the
input and output/expected files with, respectively, the `--in-header
in-header.tsv` and `--out-header out-header.tsv` options.
A header file (`in-header.tsv` or `out-header.tsv`) should be a one-line TSV file with the column names.
(Why this way? Because now you can combine this easily with data using, for instance, `cat in-header.tsv dev-0/in.tsv`.)
Now GEval will work as follows:
* when reading a file, it will first check whether the first field of
the first line is the same as the first column name; if so, it will
assume the given TSV file contains a header line (just make sure
this string is specific enough and won't be confused with the data!),
* otherwise, it will assume the file is headerless,
* in either case, the column names will be used for human-readable output, for
instance, when listing the worst features.
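As a sketch (the column names here are made up for illustration, they are not taken from any real challenge), the header files and the `cat` trick could look like this:
$ cat in-header.tsv
PlanetName	OrbitalPeriod	Radius
$ cat out-header.tsv
Mass
$ cat in-header.tsv dev-0/in.tsv > dev-0/in-with-header.tsv
When reading such a file, GEval checks whether the first field of the first line equals `PlanetName`; if it does, the line is treated as a header rather than as data.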
## Preparing a Gonito challenge
### Directory structure of a Gonito challenge
A definition of a [Gonito](https://gonito.net) challenge should be put in a separate
directory. Such a directory should
have the following structure:
* `README.md` — description of a challenge in Markdown, the first header
will be used as the challenge title, the first paragraph — as its short
description
* `config.txt` — simple configuration file with options the same as
the ones accepted by the `geval` binary (see below); usually just a
metric is specified here (e.g. `--metric BLEU`), but non-default
file names can also be given here (e.g. `--test-name test-B` for a
non-standard test subdirectory)
* `in-header.tsv` — one-line TSV file with column names for input data (features),
* `out-header.tsv` — one-line TSV file with column names for output/expected data, usually just one label,
* `train/` — subdirectory with training data (if training data are
supplied for a given Gonito challenge at all)
* `train/in.tsv` — the input data for the training set
* `train/expected.tsv` — the target values
* `dev-0/` — subdirectory with a development set (a sample test set,
which won't be used for the final evaluation)
* `dev-0/in.tsv` — input data
* `dev-0/expected.tsv` — values to be guessed
* `dev-1/`, `dev-2`, ... — other dev sets (if supplied)
* `test-A/` — subdirectory with the test set
* `test-A/in.tsv` — test input (the same format as `dev-0/in.tsv`)
* `test-A/expected.tsv` — values to be guessed (the same format as
`dev-0/expected.tsv`), note that this file should be “hidden” by the
organisers of a Gonito challenge, see notes on the structure of
commits below
* `test-B`, `test-C`, ... — other alternative test sets (if supplied)
### Initiating a Gonito challenge with geval
You can use `geval` to initiate a [Gonito](https://gonito.net) challenge:
geval --init --expected-directory my-challenge --metric RMSE
(This will generate a sample toy challenge about guessing planet masses).
Of course, any other metric can
be given to generate another type of toy challenge:
geval --init --expected-directory my-machine-translation-challenge --metric BLEU
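At any point you can also re-check that a challenge directory is well-formed with the `--validate` option (used in the Makefile below as well):
geval --validate --expected-directory my-challenge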
### Preparing a Git repository
The [Gonito](https://gonito.net) platform expects a challenge to be submitted
as a Git repository. The suggested way to prepare one is presented here
as a [Makefile](https://en.wikipedia.org/wiki/Makefile), but
of course you could use any other scripting language; the commands
should be clear if you know Bash and a few basic facts about Makefiles:
* a Makefile consists of rules, each rule specifies how to build a _target_ out of _dependencies_ using
shell commands
* `$@` is the (first) target, whereas `$<` — the first dependency
* the indentation should be done with TABs, not spaces!
```
SHELL=/bin/bash
# do not delete intermediate files
.SECONDARY:
# the directory where the challenge will be created
output_directory=...
# let's define which files are necessary, other files will be created if needed;
# we'll compress the input files with xz and leave `expected.tsv` files uncompressed
# (but you could decide otherwise)
all: $(output_directory)/train/in.tsv.xz $(output_directory)/train/expected.tsv \
$(output_directory)/dev-0/in.tsv.xz $(output_directory)/dev-0/expected.tsv \
$(output_directory)/test-A/in.tsv.xz $(output_directory)/test-A/expected.tsv \
$(output_directory)/README.md \
$(output_directory)/in-header.tsv \
$(output_directory)/out-header.tsv
# always validate the challenge
geval --validate --expected-directory $(output_directory)
# we need to replace the default README.md, we assume that it
# is kept as challenge-readme.md in the repo with this Makefile;
# note that the title from README.md will be taken as the title of the challenge
# and the first paragraph — as a short description
$(output_directory)/README.md: challenge-readme.md $(output_directory)/config.txt
cp $< $@
# prepare header files (see above section on headers)
$(output_directory)/in-header.tsv: in-header.tsv $(output_directory)/config.txt
cp $< $@
$(output_directory)/out-header.tsv: out-header.tsv $(output_directory)/config.txt
cp $< $@
$(output_directory)/config.txt:
mkdir -p $(output_directory)
geval --init --expected-directory $(output_directory) --metric MAIN_METRIC --metric AUXILIARY_METRIC --precision N --gonito-host https://some.gonito.host.net
# `geval --init` will generate a toy challenge for the given metric(s)
# ... but we remove the `in.tsv`/`expected.tsv` files just in case
# (we will overwrite them with our data anyway)
rm -f $(output_directory)/{train,dev-0,test-A}/{in,expected}.tsv
rm $(output_directory)/{README.md,in-header.tsv,out-header.tsv}
# a "total" TSV containing all the data, we'll split it later
all-data.tsv.xz: prepare.py some-other-files
# the data are generated using your script, let's say prepare.py and
# some other files (of course, it depends on your task);
# the file will be compressed with xz
./prepare.py some-other-files | xz > $@
# and now the challenge files, note that they will depend on config.txt so that
# the challenge skeleton is generated first
# The best way to split data into the train, dev-0 and test-A sets is to do it in a random,
# but _stable_ manner: the set to which an item is assigned should depend on the MD5 sum
# of some field in the input data (a field unlikely to change). Let's assume
# that you created a script `filter.py` that takes as an argument a regular expression which is applied
# to the MD5 sum (written in hexadecimal format); a sketch of such a script is given after this Makefile.
$(output_directory)/train/in.tsv.xz $(output_directory)/train/expected.tsv: all-data.tsv.xz filter.py $(output_directory)/config.txt
# 1. xzcat for decompression
# 2. ./filter.py will select 14/16=7/8 of items in a stable random manner
# 3. tee >(...) is Bash magic to fork the output into two streams
# 4. cut will select the columns
# 5. xz will compress it back
xzcat $< | ./filter.py '[0-9abcd]$' | tee >(cut -f 1 > $(output_directory)/train/expected.tsv) | cut -f 2- | xz > $(output_directory)/train/in.tsv.xz
$(output_directory)/dev-0/in.tsv.xz $(output_directory)/dev-0/expected.tsv: all-data.tsv.xz filter.py $(output_directory)/config.txt
# 1/16 of items goes to dev-0 set
xzcat $< | ./filter.py 'e$' | tee >(cut -f 1 > $(output_directory)/dev-0/expected.tsv) | cut -f 2- | xz > $(output_directory)/dev-0/in.tsv.xz
$(output_directory)/test-A/in.tsv.xz $(output_directory)/test-A/expected.tsv: all-data.tsv.xz filter.py $(output_directory)/config.txt
# (other) 1/16 of items goes to test-A set
xzcat $< | ./filter.py 'f$' | tee >(cut -f 1 > $(output_directory)/test-A/expected.tsv) | cut -f 2- | xz > $(output_directory)/test-A/in.tsv.xz
# wiping out the challenge, if you are desperate
clean:
rm -rf $(output_directory)
```
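The `filter.py` script referenced above is not part of GEval; you have to provide it yourself. Here is a minimal sketch of what such a script could look like (this is only an assumption consistent with the comments above: it reads TSV lines from standard input, computes the MD5 hex digest of a chosen field and keeps a line only when the digest matches the regular expression given as the first argument):
```
#!/usr/bin/env python3
# Sketch of a stable, hash-based splitter (hypothetical filter.py):
# keep a line only if the MD5 hex digest of a chosen field matches
# the regular expression given as the first command-line argument.
import hashlib
import re
import sys

pattern = re.compile(sys.argv[1])

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    # Assumption: the second column (the first input field in all-data.tsv)
    # is stable enough to hash; adjust this to your own data.
    key = fields[1] if len(fields) > 1 else fields[0]
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    if pattern.search(digest):
        sys.stdout.write(line)
```
With the regular expressions used above (`[0-9abcd]$`, `e$` and `f$`, applied to the last hexadecimal digit), this gives the stable 14/16 : 1/16 : 1/16 split into train, dev-0 and test-A.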
Now let's do the git stuff, we will:
1. prepare a branch (say `master`) with all the files _without_
`test-A/expected.tsv`; this branch will be cloned by people taking
up the challenge,
2. prepare a separate branch (it could also be a separate repo; we'll use the branch `dont-peek`) with
`test-A/expected.tsv` added; this branch should be accessible to the
Gonito platform, but should be kept “hidden” from regular users (or
at least they should be kindly asked not to peek there).
Branch (1) should be the parent of branch (2); for instance, the
repo (for the toy “planets” challenge) could be created as follows:
cd planets # output_directory in the Makefile above
git init
git add .gitignore config.txt README.md {train,dev-0}/{in.tsv.xz,expected.tsv} test-A/in.tsv.xz in-header.tsv out-header.tsv
git commit -m 'init challenge'
git remote add origin ssh://gitolite@gonito.net/planets # some repo to which you have access
git push origin master
git checkout -b dont-peek
git add test-A/expected.tsv
git commit -m 'hidden data'
git push origin dont-peek
## Taking up a Gonito challenge
Clone the repo with a challenge, as given on the [Gonito](https://gonito.net) web-site, e.g.
for the toy “planets” challenge (as generated with `geval --init`):
git clone git://gonito.net/planets
Now use the train data and whatever machine learning tools you like to
guess the values for the dev set and the test set, put them,
respectively, as:
* `dev-0/out.tsv`
* `test-A/out.tsv`
(These files must have exactly the same number of lines as,
respectively, `dev-0/in.tsv` and `test-A/in.tsv`. They should contain
only the predicted values.)
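A quick sanity check is to compare the line counts, e.g. with `wc -l dev-0/in.tsv dev-0/out.tsv` (if the input file is compressed as `in.tsv.xz`, pipe it through `xzcat` first).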
Check the result for the dev set with `geval`:
geval --test-name dev-0
(the current directory is assumed for `--out-directory` and `--expected-directory`).
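Given the defaults listed in the options section below, this is equivalent to the more explicit invocation:
geval --test-name dev-0 --out-directory . --expected-directory .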
If you'd like and if you have access to the test set results, you can
“cheat” and check the results for the test set:
cd ..
git clone git://gonito.net/planets planets-secret --branch dont-peek
cd planets
geval --expected-directory ../planets-secret
### Uploading your results to Gonito platform
Uploading is done via Git: commit your “out” files and push the commit to
your own repo. On [Gonito](https://gonito.net) you are encouraged to share your code, so
be nice and commit your source code as well.
git remote add mine git@github.com:johnsmith/planets-johnsmith
git add {dev-0,test-A}/out.tsv
git add Makefile magic-bullet.py ... # whatever scripts/source codes you have
git commit -m 'my solution to the challenge'
git push mine master
Then let Gonito pull them and evaluate your results, either by manually clicking
"submit" on the Gonito website or by using the `--submit` option (see below).
### Submitting a solution to a Gonito platform with GEval
A solution to a machine learning challenge can be submitted with the
special `--submit` option:
geval --submit --gonito-host HOST --token TOKEN
where:
* _HOST_ is the name of the host with a Gonito platform
* _TOKEN_ is a special per-user authorization token (can be copied
from "your account" page)
_HOST_ must be given when `--submit` is used (unless the creator of the challenge
put the `--gonito-host` option in the `config.txt` file; note that in such a case using
the `--gonito-host` option yourself will result in an error).
If _TOKEN_ is not given, GEval attempts to read it from the `.token`
file; if the `.token` file does not exist, the user is asked to
type the token (and it is then cached in the `.token` file).
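For example (the host and the token below are placeholders), you can pre-populate the cache yourself:
echo 'YOUR-TOKEN' > .token
geval --submit --gonito-host https://some.gonito.host.net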
GEval run with `--submit` does not commit or push your changes; this needs to
be done before running `geval --submit`. It will, however, check whether
the changes have been committed and pushed.
Note that using the `--submit` option for the main instance at
[gonito.net](https://gonito.net) is usually **NOT** needed, as the Git
repositories are configured there in such a way that an evaluation is
triggered with each push anyway.
## `geval` options
```
geval - stand-alone evaluation tool for tests in Gonito platform
Usage: geval ([--init] | [-v|--version] | [-l|--line-by-line] |
[-w|--worst-features] | [-d|--diff OTHER-OUT] |
[-m|--most-worsening-features ARG] | [-j|--just-tokenize] |
[-S|--submit]) ([-s|--sort] | [-r|--reverse-sort])
[--out-directory OUT-DIRECTORY]
[--expected-directory EXPECTED-DIRECTORY] [-t|--test-name NAME]
[-o|--out-file OUT] [-e|--expected-file EXPECTED]
[-i|--input-file INPUT] [-a|--alt-metric METRIC]
[-m|--metric METRIC] [-p|--precision NUMBER-OF-FRACTIONAL-DIGITS]
[-T|--tokenizer TOKENIZER] [--gonito-host GONITO_HOST]
[--token TOKEN]
Run evaluation for tests in Gonito platform
Available options:
-h,--help Show this help text
--init Init a sample Gonito challenge rather than run an
evaluation
-v,--version Print GEval version
-l,--line-by-line Give scores for each line rather than the whole test
set
-w,--worst-features Print a ranking of worst features, i.e. features that
worsen the score significantly. Features are sorted
using p-value for the Mann-Whitney U test comparing the
items with a given feature and without it. For each
feature the number of occurrences, average score and
p-value is given.
-d,--diff OTHER-OUT Compare results of evaluations (line by line) for two
outputs.
-m,--most-worsening-features ARG
Print a ranking of the "most worsening" features,
i.e. features that worsen the score the most when
comparing outputs from two systems.
-j,--just-tokenize Just tokenise standard input and print out the tokens
(separated by spaces) on the standard output, rather
than do any evaluation. The --tokenizer option must
be given.
-S,--submit Submit current solution for evaluation to an external
Gonito instance specified with --gonito-host option.
Optionally, specify --token.
-s,--sort When in line-by-line or diff mode, sort the results
from the worst to the best
-r,--reverse-sort When in line-by-line or diff mode, sort the results
from the best to the worst
--out-directory OUT-DIRECTORY
Directory with test results to be
evaluated (default: ".")
--expected-directory EXPECTED-DIRECTORY
Directory with expected test results (the same as
OUT-DIRECTORY, if not given)
-t,--test-name NAME Test name (i.e. subdirectory with results or expected
results) (default: "test-A")
-o,--out-file OUT The name of the file to be
evaluated (default: "out.tsv")
-e,--expected-file EXPECTED
The name of the file with expected
results (default: "expected.tsv")
-i,--input-file INPUT The name of the file with the input (applicable only
for some metrics) (default: "in.tsv")
-a,--alt-metric METRIC Alternative metric (overrides --metric option)
-m,--metric METRIC Metric to be used - RMSE, MSE, Accuracy, LogLoss,
Likelihood, F-measure (specify as F1, F2, F0.25,
etc.), multi-label F-measure (specify as
MultiLabel-F1, MultiLabel-F2, MultiLabel-F0.25,
etc.), MAP, BLEU, NMI, ClippEU, LogLossHashed,
LikelihoodHashed, BIO-F1, BIO-F1-Labels or CharMatch
-p,--precision NUMBER-OF-FRACTIONAL-DIGITS
Arithmetic precision, i.e. the number of fractional
digits to be shown
-T,--tokenizer TOKENIZER Tokenizer on expected and actual output before
running evaluation (makes sense mostly for metrics
such as BLEU); minimalistic, 13a and v14 tokenizers are
implemented so far. Will be also used for tokenizing
text into features when in --worst-features and
--most-worsening-features modes.
--gonito-host GONITO_HOST
Submit ONLY: Gonito instance location.
--token TOKEN Submit ONLY: Token for authorization with Gonito
instance.
```
If you need another metric, let me know, or do it yourself!
## License
Apache License 2.0
## Authors
* Filip Graliński
## Contributors
* Piotr Halama
* Karol Kaczmarek
## Copyright
2015-2019 Filip Graliński
2019 Applica.ai
## References
Filip Graliński, Anna Wróblewska, Tomasz Stanisławek, Kamil Grabowski, Tomasz Górecki, [_GEval: Tool for Debugging NLP Datasets and Models_](https://www.aclweb.org/anthology/W19-4826/)
@inproceedings{gralinski-etal-2019-geval,
title = "{GE}val: Tool for Debugging {NLP} Datasets and Models",
author = "Grali{\'n}ski, Filip and
Wr{\'o}blewska, Anna and
Stanis{\l}awek, Tomasz and
Grabowski, Kamil and
G{\'o}recki, Tomasz",
booktitle = "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4826",
pages = "254--262",
abstract = "This paper presents a simple but general and effective method to debug the output of machine learning (ML) supervised models, including neural networks. The algorithm looks for features that lower the evaluation metric in such a way that it cannot be ascribed to chance (as measured by their p-values). Using this method {--} implemented as MLEval tool {--} you can find: (1) anomalies in test sets, (2) issues in preprocessing, (3) problems in the ML model itself. It can give you an insight into what can be improved in the datasets and/or the model. The same method can be used to compare ML models or different versions of the same model. We present the tool, the theory behind it and use cases for text-based models of various types.",
}