Commit Graph

86 Commits

SHA1 Message Date
210929751d const to the getter for total occurrences count 2019-01-16 13:15:30 +01:00
ec621fb310 working full search 2019-01-09 18:31:52 +01:00
5a7cbbe9e9 full search stub - tests needed 2019-01-09 15:30:56 +01:00
53b100b2e4 lowercasing bad utf 2018-12-13 17:43:01 +01:00
2eda92fe7a interval contains 2018-12-12 21:45:07 +01:00
bd4ff81e32 ensuring UTF-8 strings 2017-10-15 18:54:15 +02:00
61631c52a3 lexicon search 2017-10-10 15:39:47 +02:00
5e809efcce corrected tokenizer 2017-05-05 12:58:32 +02:00
96a5bc3108 original sentence in tokenized sentence 2017-04-28 13:48:32 +02:00
dceb0d9f47 date recognition 2017-04-27 10:37:29 +02:00
bd73749388 new tokenizer 2017-04-26 17:02:18 +02:00
a0673df75a cpplint corrections 2017-04-22 23:47:48 +02:00
970dda5dc2 option of white space tokenization while searching 2017-04-22 23:45:51 +02:00
31e4f091ad multiple results 2017-04-21 14:51:58 +02:00
c3826919ba changes in CMakeLists.txt 2017-03-03 11:28:54 +01:00
7e005bfca7 changed significance factor to 2 2016-10-22 18:02:04 +02:00
8bc739ff20 added boundary on simple search results 2016-01-25 22:42:42 +01:00
b3d7c993aa tokenize only option - no word map 2016-01-01 20:45:07 +01:00
bbf3853d2a added lowercasing when tokenizing by space 2015-12-29 21:44:46 +01:00
0a8d2fdd39 tokenize by whitespace option 2015-12-27 20:54:40 +01:00
873d7c300c added parameterless constructor for concordia 2015-10-19 15:38:10 +02:00
1adabf4833 add index path as required argument to concordia constructor 2015-10-16 22:14:11 +02:00
fa3138df29 count occurrences feature 2015-10-01 13:36:54 +02:00
bd62420cd5 updated tutorial 2015-08-24 14:30:20 +02:00
0a3fd8a04e added an extremely important improvement to the concordia search algorithm - gapped overlays cut-off 2015-08-24 13:10:06 +02:00
209e374226 repaired concordia test 2015-08-19 20:53:40 +02:00
68fecaddf8 adding all tokenized examples 2015-08-19 20:49:26 +02:00
a765443a01 simple search returns matched pattern fragments 2015-08-07 12:54:57 +02:00
28704c2f43 separated tokenization and adding to index 2015-08-01 17:03:39 +02:00
5a57406875 finished original word positions 2015-06-27 12:40:24 +02:00
a8c5fa0c75 original word positions 2015-06-27 10:09:49 +02:00
dba70b4e24 done word positions 2015-06-26 22:50:53 +02:00
724bf0d080 new responsibilities of tokenized sentence 2015-06-26 15:38:24 +02:00
9b1735516c working sentence tokenizer 2015-06-25 20:49:22 +02:00
8432dd321f tokenizer in progress 2015-06-25 10:12:51 +02:00
0baf3e4ef2 character intervals in progress 2015-06-22 13:52:56 +02:00
07d5d4438b clear index, examples 2015-05-04 20:40:44 +02:00
abbd5b1ae8 finished documentation 2015-05-01 14:52:53 +02:00
9e550ca1cf more doc 2015-04-30 22:22:54 +02:00
87a26bfa3b cleaned configuration, doc 2015-04-30 21:15:18 +02:00
b790c6898f stable release 2015-04-30 09:29:10 +02:00
db63cf776e doc 2015-04-28 21:34:07 +02:00
bb7608d05e anubis searcher -> concordia searcher 2015-04-24 11:48:32 +02:00
04df67c6f0 100% test in concordia-console. All passed! 2015-04-22 16:50:12 +02:00
f64449311d removed stop words - works slower 2015-04-21 21:33:08 +02:00
5c2ae86097 output concordia score 2015-04-21 20:44:49 +02:00
7549703414 best overlay computation 2015-04-21 15:14:48 +02:00
024fbf72aa concordia search 2015-04-17 14:17:59 +02:00
4e02afc897 anubis search v1 - very slow for some patterns 2015-04-16 11:39:39 +02:00
0d4bdf12de removed using namespace std 2015-04-15 14:14:10 +02:00