TD-40348 Transformer Page

When loading a recipe in an imported flow that references an imported Excel dataset, the Transformer page displays an "Input validation failed: (Cannot read property 'filter' of undefined)" error, and the screen is blank.


Workaround: In Flow View, select an output object and run a job. Then, load the recipe in the Transformer page and generate a new sample. For more information, see Import Flow.


After installing on Ubuntu 16.04 (Xenial), the platform may fail to start with an "ImportError: No module named pkg_resources" error.


Workaround: Verify that the python-setuptools package is installed. Install it if it is missing.
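The missing dependency can also be checked from Python directly. This is a minimal sketch; the helper name is illustrative and not part of the platform:

```python
def has_pkg_resources():
    """Return True if pkg_resources (shipped with setuptools) imports cleanly."""
    try:
        import pkg_resources  # noqa: F401 -- provided by python-setuptools
        return True
    except ImportError:
        return False

if not has_pkg_resources():
    print("pkg_resources missing: install python-setuptools "
          "(e.g. apt-get install python-setuptools)")
```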

TD-35644 Compilation/Execution: Extractpatterns with the "HTTP Query strings" option does not work.

When executing Spark 2.3.0 jobs on S3-based datasets, jobs may fail due to a known incompatibility between HTTPClient 4.5.x and aws-java-sdk 1.10.x.


Workaround: Use Spark 2.1.0 instead. In the Admin Settings page, set the spark.version property to 2.1.0. For more information, see Admin Settings Page.

For additional details on Spark versions, see Configure for Spark.


Clicking the Cancel Job button generates a 405 status code error, and clicking the Yes button fails to close the dialog.


Workaround: After you have clicked the Yes button once, click the No button. The job is removed from the page.


Spark jobs fail on the LCM function when negative numbers are used as inputs.


Workaround: If you wrap each negative input in the ABS function, the LCM function can be computed. You may have to verify manually whether a negative value is applicable for the LCM output.
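The arithmetic behind this workaround can be sketched in plain Python. The helper below is hypothetical, not a platform function; it relies on the identity that LCM(a, b) and LCM(|a|, |b|) agree up to sign:

```python
from math import gcd

def lcm_abs(a, b):
    # Workaround pattern: wrap negative inputs in ABS before computing LCM.
    a, b = abs(a), abs(b)
    if a == 0 or b == 0:
        return 0
    return a * b // gcd(a, b)

print(lcm_abs(-4, 6))  # magnitude of LCM(-4, 6) -> 12
```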


The WEEKNUM function is calculated differently in the Photon and Spark running environments, due to the underlying frameworks on which these environments are built:

  • Photon week 1 of the year: The week that contains January 1.
  • Spark week 1 of the year: The week that contains at least four days in the specified year.

For more information, see WEEKNUM Function.
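The two conventions can be illustrated with plain Python dates. Both helpers below are illustrative approximations of the behaviors described above (assuming Sunday-start weeks for the Photon-style convention), not the platform's implementations:

```python
from datetime import date

def weeknum_jan1(d):
    # Convention where week 1 is the week that contains January 1
    # (Sunday-start weeks assumed for this sketch).
    jan1 = date(d.year, 1, 1)
    # Ordinal of the Sunday on or before January 1.
    week_start = jan1.toordinal() - ((jan1.weekday() + 1) % 7)
    return (d.toordinal() - week_start) // 7 + 1

def weeknum_iso(d):
    # ISO-8601 convention: week 1 is the first week containing
    # at least four days of the specified year.
    return d.isocalendar()[1]

d = date(2016, 1, 1)  # a Friday
print(weeknum_jan1(d), weeknum_iso(d))  # 1 53 -- the conventions disagree
```

January 1, 2016 falls in week 1 under the first convention but in week 53 of the prior ISO year under the second, which is the kind of discrepancy to expect between the two running environments.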

The Spark running environment does not support the use of multi-character delimiters for CSV outputs.

Workaround: You can switch your job to a different running environment or use single-character delimiters.

TD-34840 Transformer Page: The platform fails to provide transformation suggestions when you select keys from an object that contains many of them.
TD-34119 Compilation/Execution: WASB job fails when publishing two successive appends.
TD-30855 Publish

Creating a dataset from Parquet-only output results in a "Dataset creation failed" error.


NOTE: If you generate results in Parquet format only, you cannot create a dataset from them, even if the Create button is present.


You cannot publish ad-hoc results for a job when another publishing job is in progress for the same job.


Workaround: Wait until the previous job has finished publishing before retrying to publish the failing job.


In multi-file imports, if the final record of a file lacks a trailing newline, that record may be merged with the first record of the next file and then dropped in the Photon running environment.


Workaround: Verify that every file-based source ends with a newline.
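A quick pre-flight check for this condition can be scripted. The helper below is illustrative, not part of the platform; run it over each file in the import before loading:

```python
import os

def ends_with_newline(path):
    """Return True if the file is empty or its last byte is a newline."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        if f.tell() == 0:
            return True  # empty file: nothing to merge into the next source
        f.seek(-1, os.SEEK_END)
        return f.read(1) == b"\n"
```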