When loading a recipe in an imported flow that references an imported Excel dataset, the Transformer page displays the error "Input validation failed: (Cannot read property 'filter' of undefined)", and the screen is blank.
After installing on Ubuntu 16.04 (Xenial), the platform may fail to start with an "ImportError: No module named pkg_resources" error.
TD-35644 (Compilation/Execution): Extract patterns for the "HTTP Query strings" option does not work.
When executing Spark 2.3.0 jobs on S3-based datasets, jobs may fail due to a known incompatibility between HttpClient 4.5.x and aws-java-sdk 1.10.x. For details, see https://github.com/apache/incubator-druid/issues/4456.
For additional details on Spark versions, see Configure for Spark.
Clicking the Cancel Job button generates a 405 status code error, and clicking the Yes button fails to close the dialog.
Spark jobs fail when the LCM function receives negative numbers as inputs.
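The failure mode with negative inputs can be illustrated outside the platform. A least common multiple built directly on the identity lcm(a, b) = a * b / gcd(a, b) produces a negative result when one input is negative, unless the absolute value is taken. The sketch below is purely illustrative and is not the platform's implementation:

```python
from math import gcd

def lcm_naive(a: int, b: int) -> int:
    # Direct use of the identity lcm(a, b) = a * b / gcd(a, b).
    # A negative input makes the product negative, so the sign
    # leaks into the result.
    return a * b // gcd(a, b)

def lcm_safe(a: int, b: int) -> int:
    # Taking the absolute value keeps the result non-negative
    # for any sign combination; lcm with 0 is defined as 0.
    if a == 0 or b == 0:
        return 0
    return abs(a * b) // gcd(a, b)

print(lcm_naive(-4, 6))  # -12: sign leaks through
print(lcm_safe(-4, 6))   # 12
```

Until the issue is resolved, applying ABS to inputs before calling LCM is one way to sidestep the negative-input case.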
The WEEKNUM function is calculated differently in the Photon and Spark running environments, due to the underlying frameworks on which those environments are built.
For more information, see WEEKNUM Function.
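Week-numbering discrepancies of this kind typically come down to the underlying libraries using different conventions, for example ISO 8601 weeks versus weeks counted from January 1. The exact conventions used by each running environment are documented in WEEKNUM Function; the snippet below only demonstrates, in Python, how two common conventions disagree on the same date:

```python
from datetime import date

d = date(2021, 1, 1)  # a Friday

# ISO 8601: week 1 is the week containing the year's first
# Thursday, so Jan 1, 2021 falls in week 53 of ISO year 2020.
iso_week = d.isocalendar()[1]

# strftime("%U"): weeks start on Sunday and are counted from
# the start of the year, so Jan 1, 2021 falls in week 0.
us_week = int(d.strftime("%U"))

print(iso_week, us_week)  # 53 0
```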
The Spark running environment does not support use of multi-character delimiters for CSV outputs. For more information on this issue, see https://issues.apache.org/jira/browse/SPARK-24540.
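One possible workaround, sketched below under the assumption that some single character never appears in your data, is to generate the CSV with a placeholder single-character delimiter and post-process the output into the multi-character form:

```python
import csv
import io

rows = [["id", "name"], ["1", "alice"], ["2", "bob"]]

# Write with a placeholder single-character delimiter that does
# not occur in the data (here the ASCII unit separator, \x1f).
buf = io.StringIO()
csv.writer(buf, delimiter="\x1f").writerows(rows)

# Post-process: replace the placeholder with the desired
# multi-character delimiter, e.g. "||".
multi = buf.getvalue().replace("\x1f", "||")
print(multi)
```

This is a generic post-processing sketch, not a platform feature; validate that the placeholder character cannot occur in your data before using it.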
TD-34840 (Transformer Page): The platform fails to provide transformation suggestions when selecting keys from an object that contains many keys.
TD-34119 (Compilation/Execution): WASB jobs fail when publishing two successive appends.
Creating a dataset from Parquet-only output results in a "Dataset creation failed" error.
You cannot publish ad-hoc results for a job while another publishing operation is in progress for the same job.
For multi-file imports where the final record of a file lacks a trailing newline, that record may be merged with the first record of the next file and then dropped in the Photon running environment.
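One way to avoid the merged-record case is to normalize the source files before import so that every file ends with a newline. A minimal pre-processing sketch, using hypothetical file paths:

```python
def concat_with_newlines(paths, out_path):
    # Concatenate text files, ensuring each file contributes a
    # trailing newline so its last record cannot merge with the
    # first record of the next file.
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as f:
                data = f.read()
            if data and not data.endswith("\n"):
                data += "\n"
            out.write(data)
```

The same check (append a newline if the last byte is not one) can be applied in place to each file instead of concatenating, if the files must remain separate for import.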