* add clarity for reinterpret docs
Helps fix #3292
* update reinterpret docs phrasing
We agreed to use "encoding" to be friendly to user exposed messaging instead of "encoder" and "decoder" that is used internally.
* Fix serializeReinterpret() test JSON
* Changed to reflect the function's acceptance of either a simple string or a regex
* Cast p into a Pattern
* Cast p into a Pattern
* Changed test to reflect the new output from the function
* Add more keyer tests
- All forms of Unicode whitespace for both fingerprint & N-gram fingerprint
- additional N-gram fingerprint cases
* Improve fingerprint keyers
- Update N-gram fingerprint keyer to match (missed last time)
- refactor string normalization to reduce redundancy between two keyers
- add C1 controls to control characters that are stripped
- include all Unicode whitespace characters in splitting delimiter
and don't strip controls which are whitespace (HT, LF, VT, FF, CR,
NEL)
- minor cleanups, simplifications, and performance optimizations
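A minimal sketch of the whitespace handling described in the keyer entries above; the class name and pattern here are illustrative, not the actual FingerprintKeyer code:

```java
import java.util.Arrays;
import java.util.TreeSet;
import java.util.regex.Pattern;

public class WhitespaceSplitSketch {

    // Illustrative delimiter: Unicode separators (category Z) plus the
    // whitespace control characters HT, LF, VT, FF, CR, and NEL
    private static final Pattern WHITESPACE = Pattern.compile("[\\p{Z}\\u0009-\\u000D\\u0085]+");

    public static String fingerprintKey(String s) {
        // lowercase, split on any Unicode whitespace, de-duplicate, sort, re-join
        TreeSet<String> tokens = new TreeSet<>(Arrays.asList(WHITESPACE.split(s.toLowerCase().trim())));
        return String.join(" ", tokens);
    }

    public static void main(String[] args) {
        // NO-BREAK SPACE (\u00A0) is treated like an ordinary space
        System.out.println(fingerprintKey("Monde\u00a0du  Caf\u00e9")); // prints: café du monde
    }
}
```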
* Visually center links box
* Add the Java runtime info to the About page - fixes #3240
- Add the Java runtime name & version to GetVersionCommand
- Add the returned information to the About page
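A minimal sketch of reading such runtime details; the exact properties and fields exposed by GetVersionCommand may differ:

```java
public class JavaRuntimeInfoSketch {
    public static void main(String[] args) {
        // Standard JVM system properties, e.g. "OpenJDK Runtime Environment" / "11.0.9+11"
        String runtimeName = System.getProperty("java.runtime.name");
        String runtimeVersion = System.getProperty("java.runtime.version");
        System.out.println(runtimeName + " " + runtimeVersion);
    }
}
```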
* Clean up importer refactoring
Remove an extra copy of filename setting.
Revert some additional API changes (retaining both versions)
* Revert archive file name changes & mark as deprecated
* Add utility helpers to create array of comparable items
* Extend sort() to handle arrays with nulls
- Instead of NullPointerException on nulls, sort them last
- add JSON helpers to return Comparable[] in addition to Object[]
- Non-homogeneous arrays or arrays with non-primitive
objects (array or object) are not sortable
- Add tests for both new and old sort functionality
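An illustration of the nulls-last behavior described above; a minimal sketch using a plain Long[], not the GREL sort() implementation itself:

```java
import java.util.Arrays;
import java.util.Comparator;

public class NullsLastSortSketch {
    public static void main(String[] args) {
        Long[] values = { 3L, null, 1L, 2L, null };

        // Sort nulls last instead of throwing NullPointerException
        Arrays.sort(values, Comparator.nullsLast(Comparator.naturalOrder()));

        System.out.println(Arrays.toString(values)); // [1, 2, 3, null, null]
    }
}
```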
* Refactor GREL Get tests
- move helper up to RefineTest
- move tests to the correct module
* Extend forEach() to support JSON objects - fixes #3149
Also add tests for existing forEach forms in addition to the new one
* Add a couple more tests
* Migrate reconciliation calls to OkHTTP, for #2903
* Migrate to Apache HTTP Commons
* Migrate data extension to Apache HTTP client
* Deprecate HttpURLConnection in RefineServlet
* Use LaxRedirectStrategy, clean up imports
* Remove read and pool timeouts, only keep the connection timeout
* Adapt mocking of HTTP calls after migration
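A minimal sketch of the client setup after this migration, assuming Apache HttpClient 4.5; the class name and timeout value are illustrative:

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.client.LaxRedirectStrategy;

public class HttpClientSetupSketch {
    public static CloseableHttpClient build() {
        // Keep only the connection timeout; read/pool timeouts stay at their defaults
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(30 * 1000) // milliseconds
                .build();

        return HttpClients.custom()
                .setRedirectStrategy(new LaxRedirectStrategy()) // follow redirects for common methods
                .setDefaultRequestConfig(config)
                .build();
    }
}
```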
* Fix ToDate test failure - fixes #3026
Instead of computing offset from UTC at current
point in time, use the offset from the parsed
date so that we're not affected by crossing
a daylight savings time boundary.
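A minimal sketch of the offset logic described above; names and dates are illustrative:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class OffsetFromParsedDateSketch {
    public static void main(String[] args) {
        LocalDateTime parsed = LocalDateTime.of(2020, 1, 15, 12, 0); // a winter date
        ZoneId zone = ZoneId.systemDefault();

        // Fragile: the offset in effect right now may be on the other side of a DST boundary
        ZoneOffset offsetNow = zone.getRules().getOffset(Instant.now());

        // Robust: the offset that was in effect at the parsed date itself
        ZoneOffset offsetThen = zone.getRules().getOffset(parsed);

        OffsetDateTime result = parsed.atOffset(offsetThen);
        System.out.println(offsetNow + " vs " + offsetThen + " -> " + result);
    }
}
```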
* Fix date parsing with locale as first format string
Also refactors for simplicity, restores some dropped tests,
and restores the previous behavior of considering a bad
format string an error instead of silently ignoring it.
It does NOT address another issue, introduced in May 2018,
of treating date/times without timezone
information as UTC instead of local.
* Restore error checking and messages
* Save & restore default timezone for tests
Also add some ToDos for places where LocalDate is being misused.
* Make sure the data directory is a directory, not a file
* Add a test for zip archive import
Also tests the saving of the archive file name and source filename
* Add TODOs - no functional changes
* Cosmetic cleanups
* Revert importer API changes for archive file name parameter
Fixes #2963
- restore binary compatibility to the API
- hoist the handling of both fileSource and archiveFileName from
TabularImportingParserBase and TreeImportingParserBase to
ImportingParserBase so that there's only one copy. These 3 classes are
all part of the internal implementation, so there should be no
compatibility issue.
* Revert weird flow of control for import options metadata
This reverts the very convoluted control flow that was introduced
when adding the input options to the project metadata. Instead
the metadata is all handled in the importer framework rather than
having to change APIs or have individual importers worry about
it.
The feature never had test coverage, so that is still to be added.
* Add test for import options in project metadata & fix bug
Fixes bug where same options object was being reused and overwritten,
so all copies in the list ended up the same.
* Fix text guesser so it doesn't guess wikitext
Fixes #2850
- Add simple magic detector for zip & gzip files to keep
it from attempting to guess binary files
- Add a counter for C0 controls for the same reason
- Tighten wikitable counters to require marker at
beginning of the line, per the specification
- Refactor to use Apache Commons instead of private
counting methods
- Add tests for most TextGuesser formats
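A minimal sketch of the zip/gzip magic-number check mentioned above; illustrative only, the actual guesser in OpenRefine does considerably more:

```java
import java.io.IOException;
import java.io.InputStream;

public class MagicNumberSketch {

    // Returns true if the stream starts with a zip ("PK") or gzip (0x1F 0x8B) signature
    public static boolean looksLikeZipOrGzip(InputStream in) throws IOException {
        byte[] header = new byte[2];
        if (in.read(header) < 2) {
            return false;
        }
        boolean zip = header[0] == 0x50 && header[1] == 0x4B;
        boolean gzip = (header[0] & 0xFF) == 0x1F && (header[1] & 0xFF) == 0x8B;
        return zip || gzip;
    }
}
```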
* Remove misplaced duplicate test data file
* Fix LGTM warning + minor cleanups
* Use BoundedInputStream to prevent runaway lines
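A minimal sketch of how a bounded stream caps what one guessing pass can read, assuming Apache Commons IO; the 64 KiB limit is illustrative:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.input.BoundedInputStream;

public class BoundedReadSketch {

    // Wrap the raw stream so a single enormous "line" cannot consume the whole file
    public static BufferedReader boundedReader(InputStream in) {
        BoundedInputStream bounded = new BoundedInputStream(in, 64 * 1024);
        return new BufferedReader(new InputStreamReader(bounded, StandardCharsets.UTF_8));
    }
}
```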
* Add utility functions to check/convert dates
* Add date tests and refactor to DRY up
* Fix date import - fixes #1908
Change from java.util.Date to OpenRefine 3.0+'s OffsetDateTime
Fixes #1908
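A minimal sketch of the conversion this implies; the actual helper and its signature in the codebase may differ:

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.Date;

public class DateConversionSketch {

    // Convert a legacy java.util.Date (e.g. from a spreadsheet cell)
    // into the OffsetDateTime type used since OpenRefine 3.0
    public static OffsetDateTime toOffsetDateTime(Date date) {
        return date.toInstant().atOffset(ZoneOffset.UTC);
    }
}
```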
* Centralize date conversion
* Moving utility methods to ParsingUtilities
* Fix tests
* Use standard text normalization - fixes #2898
Fixes #2898. Fixes #409. Refs #650
Replaces the homegrown, ISO Latin-1-only character substitution
with standard Java normalization to NFD, followed by diacritic
removal and a few custom character expansions/replacements.
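A minimal sketch of the standard normalization approach; a later entry switches to NFKD and adds custom expansions, so this is not the actual asciify() code:

```java
import java.text.Normalizer;

public class AsciifySketch {

    // Decompose, then strip the combining marks left behind
    public static String stripDiacritics(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}", "");
    }

    public static void main(String[] args) {
        System.out.println(stripDiacritics("résumé naïve")); // resume naive
    }
}
```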
* Fix Mac build
* Improve compatibility with previous code
One intentional change is folding O with stroke to
oe instead of o.
- Use more powerful NFKD instead of NFD
- strip punctuation after decomposition since it can generate
new punctuation
- Add compatibility test for old asciify() method
- Add some graphically similar characters to substitution table
* Add oe character/ligature & more long S forms
* More tests for ligatures and Latin Extended
* Add Latin-1 Supplement tests
Fixes #1161
This change parallels what was done in #1257 (1da3c00) to fix
the FingerprintKeyer and moves the diacritic removal before
the deduping. Includes a test.
* Truncate any completely empty columns on the right
Fixes #565
The current versions of Open Office create default spreadsheets
with over 1000 empty columns. Keep track of the rightmost
non-empty column when importing and truncate everything else.
Also adds a basic ODS import test.
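A minimal sketch of the truncation idea; the names are illustrative and the real importer tracks the rightmost non-empty column across the whole sheet rather than a single row:

```java
import java.util.List;

public class TruncateEmptyColumnsSketch {

    // Track the rightmost non-empty cell, then drop everything to its right
    public static List<Object> truncate(List<Object> cells) {
        int lastNonEmpty = -1;
        for (int i = 0; i < cells.size(); i++) {
            Object cell = cells.get(i);
            if (cell != null && !cell.toString().isEmpty()) {
                lastNonEmpty = i;
            }
        }
        return cells.subList(0, lastNonEmpty + 1);
    }
}
```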
* Fix dates in ODS spreadsheets
Fixes #2224
* Performance optimized version of ToNumber
Approximately 5x faster for floats (data dependent)
and about the same speed for integers.
- Instead of blindly trying to parse as Long, do a quick check
for obvious problems (e.g. decimal point).
- Don't trim. It's already done by called methods.
- Use valueOf() instead of parse() to avoid object creation
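A minimal sketch of the fast-path idea; not the actual GREL toNumber(), which handles more cases and error reporting:

```java
public class ToNumberSketch {

    public static Object toNumber(String s) {
        // Quick check: only attempt the cheap Long parse when the text
        // contains no decimal point or exponent
        if (s.indexOf('.') < 0 && s.indexOf('e') < 0 && s.indexOf('E') < 0) {
            try {
                return Long.valueOf(s); // valueOf() can reuse cached instances
            } catch (NumberFormatException e) {
                // fall through to the floating point parse
            }
        }
        try {
            return Double.valueOf(s);
        } catch (NumberFormatException e) {
            return s; // not a number; the real function reports an error instead
        }
    }
}
```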
* Add Java Microbenchmark Harness
The shaded JAR is missing the OpenRefine classes, for a reason
that I haven't figured out, so it requires openrefine-main.jar at runtime.
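For context, a minimal JMH benchmark looks like this; illustrative only, not the benchmark class actually added:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class ToNumberBenchmark {

    private final String floatInput = "12345.6789";

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public Object parseFloat() {
        return Double.valueOf(floatInput);
    }
}
```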
* Remove old implementations of ToNumber
* Remove unneeded dependencies from main project
* Clean up and reformat
Refs #2863
The tree importer sorts columns/column groups by how populated
they are, which is of arguable utility, but the tie-breaker
of ordering by shortest column name is completely silly.
This change removes that and, in conjunction with a stable sort
algorithm, will preserve the original order of the columns.
* Fix charset encoding & MIME type handling
Character set (i.e. what we call "encoding") is part of the Content-Type,
*not* the Content-Encoding, which specifies compression (e.g. gzip).
This correctly sets the character set encoding as well as cleaning
the MIME type so that additional parsing doesn't need to be done
downstream (and removes that code).
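A minimal sketch of pulling both pieces of information out of the Content-Type header, assuming Apache HttpCore's ContentType helper; not necessarily how the importer itself parses it:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

import org.apache.http.entity.ContentType;

public class ContentTypeSketch {
    public static void main(String[] args) {
        // The charset rides along as a Content-Type parameter;
        // Content-Encoding would carry compression such as gzip
        ContentType ct = ContentType.parse("text/csv; charset=ISO-8859-1");

        String mimeType = ct.getMimeType();  // "text/csv", already stripped of parameters
        Charset charset = ct.getCharset() != null ? ct.getCharset() : StandardCharsets.UTF_8;

        System.out.println(mimeType + " / " + charset);
    }
}
```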
* Use "text" instead of "text/line-based" as default fallback format
The TextLineBasedGuesser only tries a limited number of
formats (CSV, TSV, fixed), so we can't get out of that hole to
find JSON, XML, etc.
Start with a more general format instead to improve our
guessing odds.
* Support content type Structured Name Syntax Suffixes (+json +xml)
If we can't find a fully specified content type in our lookup,
fall back to just the suffix (which is registered with a leading +)
Fixes #2800. Fixes #2805.
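A minimal sketch of the suffix fallback; the map contents and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class ContentTypeSuffixSketch {

    private static final Map<String, String> FORMATS = new HashMap<>();
    static {
        FORMATS.put("application/json", "text/json");
        // structured syntax suffixes are registered with a leading '+'
        FORMATS.put("+json", "text/json");
        FORMATS.put("+xml", "text/xml");
    }

    public static String guessFormat(String mimeType) {
        String format = FORMATS.get(mimeType);
        if (format == null) {
            int plus = mimeType.lastIndexOf('+');
            if (plus >= 0) {
                format = FORMATS.get(mimeType.substring(plus));
            }
        }
        return format;
    }

    public static void main(String[] args) {
        System.out.println(guessFormat("application/ld+json")); // text/json
    }
}
```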
* Harden reconciliation - Fixes#2590
- check for non-JSON / unparseable JSON returns
- handle malformed results response with no name for candidates
- catch any Exception, not just IOExceptions
- call processManager.onFailedProcess() for cleanup on error
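A minimal sketch of the defensive parsing described above; illustrative only, the real code also reports the failure through the process manager:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SafeJsonParseSketch {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Treat anything that is not parseable JSON as a failed call
    // instead of letting the exception escape
    public static JsonNode parseOrNull(String responseBody) {
        try {
            return MAPPER.readTree(responseBody);
        } catch (Exception e) { // catch any Exception, not just IOException
            return null;
        }
    }
}
```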
* Add default constructor for Jackson
Jackson complains about needing a default constructor for the
NON_DEFAULT annotation, but I'm not sure why this worked before.
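For context, with class-level Include.NON_DEFAULT Jackson builds a default instance to learn what the default property values are, so a no-arg constructor has to exist. An illustrative class, not the actual OpenRefine type:

```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;

@JsonInclude(JsonInclude.Include.NON_DEFAULT)
public class ExampleOptions {

    @JsonProperty("limit")
    private int limit = 10;

    public ExampleOptions() {
        // required so Jackson can construct the "default" instance to compare against
    }

    public ExampleOptions(int limit) {
        this.limit = limit;
    }
}
```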
* Clean up indentation and unused variable - no functional changes
Make indentation consistent throughout the module, changing recently
added lines to use the standard all spaces convention.
Remove unused count variable
* Simplify control flow
* Update limit parameter comment. No functional change.
* Replace ternary expression which is causing NPE
* Add reconciliation tests using mock HTTP server
* Fixes #486. Builds on code from Steffen Stundzig
- Switch from ICU4J to juniversalchardet
(Java port of Mozilla charset detector)
- Replace org.json code with Jackson
- Add tests
- Add TODO for multi-file character encoding mismatches
* Restore dependency lost in rebase
Co-authored-by: Steffen Stundzig <git@stundzig.de>
* Use ContentDisposition instead of ContentType to control download
Fixes #1197. Previously we were using a funky ContentType to attempt
to force a file download rather than display in browser, but this
conflicted with attempts to save UTF-8 text outside the Basic
Multilingual Plane (BMP).
By switching to ContentDisposition: attachment, which has been
the preferred method for a number of years, we can avoid this conflict.
As part of this, switch to using the "preview" param consistently
to control preview vs download rather than the content type.
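A minimal sketch of the header change; illustrative servlet code, not the actual exporter:

```java
import javax.servlet.http.HttpServletResponse;

public class DownloadHeaderSketch {

    public static void setDownloadHeaders(HttpServletResponse response, String fileName) {
        // An honest content type plus UTF-8, now that the type no longer
        // has to trick the browser into downloading
        response.setCharacterEncoding("UTF-8");
        response.setContentType("text/plain");

        // Content-Disposition: attachment forces the save dialog
        // (non-ASCII file names would also need RFC 5987 encoding)
        response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
    }
}
```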
* Switch content type to text/plain
Now that we don't need to use ContentType to control download
behavior, we can use something more reasonable.
* Use mockwebserver instead of live network for tests
Fixes #2680. Fixes #1904.
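A minimal sketch of the test setup with okhttp3's MockWebServer; illustrative, the real tests wire this into the code under test:

```java
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;

public class MockWebServerSketch {
    public static void main(String[] args) throws Exception {
        MockWebServer server = new MockWebServer();
        server.enqueue(new MockResponse()
                .setHeader("Content-Type", "text/csv; charset=UTF-8")
                .setBody("a,b\n1,2\n"));
        server.start();
        try {
            String url = server.url("/test.csv").toString();
            // ... point the code under test at `url` instead of a live site ...
            System.out.println(url);
        } finally {
            server.shutdown();
        }
    }
}
```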
* Remove use of deprecated methods
* Convert to use Apache HTTP Components client library
Fixes #1410 by virtue of redirect following being a built-in
capability of the library, along with retries with binary backoff,
built-in decompression, etc.
* Address review comments
* Fix bug in choice counts for records mode
* Add test for value grouper on records
* Refactor and comment code
* Count distinct instances of null/blank data
* Update test to check for blank data count in records
* Remove unnecessary import statement
* Added options UI
* Added definition for both separators
* Added tests
* Removed definitions from the backend and added them to the frontend
* Added reverse order and handling for accented characters
* Added tests for accented characters and reverse split
* Fixed build errors
* Use Unicode character ranges instead
* Added examples
* Convert illegal characters into legal ones.
* Test tab in key & value string
Also fix up test that depended on previous TAB
related error message and clean up logging
Co-authored-by: Tom Morris <tfmorris@gmail.com>
NOTE: Changes the public API where some of the old types were
embedded, which means that any extensions that extend these
interfaces will have to be updated.
Fixes #2690.
* Save preferences JSON using UTF-8 encoding. Bulletproof prefs load.
Fixes #2543. Fixes #2627.
Always use UTF-8 to write JSON because the platform default encoding
(e.g. ISO 8859-1) might not produce legal JSON.
Also be more conservative about keeping backups if we fail to write.
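A minimal sketch of naming the charset explicitly when writing the prefs JSON; illustrative, not the actual OpenRefine code:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

import com.fasterxml.jackson.databind.ObjectMapper;

public class Utf8JsonWriteSketch {

    public static void writePrefs(File file, Object prefs) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Always specify UTF-8 rather than relying on the platform default
        try (Writer writer = new OutputStreamWriter(new FileOutputStream(file), StandardCharsets.UTF_8)) {
            mapper.writeValue(writer, prefs);
        }
    }
}
```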
* Handle the case where the backup prefs file is better than the more recent one
* Recover from corrupted prefs with null starred list.
Fixes #2544. Replaces null with an empty list.
* Run tests with non-UTF-8 encoding
Make sure that we don't depend on UTF-8 being the default encoding
because it isn't true everywhere (e.g. Windows)
* Add test for non-ASCII chars in workspace.json
This depends on the default Java encoding being something
other than UTF-8 to test properly.