
...

Ticket  Description
TD-36332  Data grid can display wrong results if a sample is collected and the dataset is unioned.
TD-36192  Canceling a step in the recipe panel can cause column menus to disappear in the data grid.
TD-36011  Users can import modified exports or exports from a different version, which do not work.
TD-35916  Cannot log out via SSO.
TD-35899  A deployment user can see all deployments in the instance.
TD-35746  The /v4/importedDatasets GET method is failing.
TD-35780  Upgrade: Duplicate metadata in separate publications causes DB migration failure.
TD-35644  The extractpatterns transform with the "HTTP Query strings" option does not work.
TD-35504  Canceling a job throws a 405 status code error. Clicking Yes repeatedly reopens the Cancel Job dialog.
TD-35481  After upgrade, the recipe is malformed at the splitrows step.
TD-35177  The login screen pops up repeatedly when access permission is denied for a connection.
TD-34822  Case-sensitive variations in date range values are not matched when creating a dataset with parameters.

NOTE: Date range parameters are now case-insensitive.
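The case-insensitive matching described in the note above can be sketched as follows. This is a minimal illustration only; the values and the product's actual matching logic are assumptions, not taken from the release notes.

```python
# Hypothetical date range values; the product's real parameter syntax may differ.
candidates = ["2023-Mar-01", "2023-MAR-01", "2023-mar-01"]
target = "2023-mar-01"

# Case-insensitive matching: normalize both sides before comparing,
# so casing variations in the stored values no longer cause misses.
matches = [value for value in candidates if value.lower() == target.lower()]
print(matches)
```

With case-sensitive comparison, only the exact-case value would match; after normalization, all three variants do.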

TD-33428  Job execution fails on a recipe with a high limit in a split transformation, due to a Java null pointer error during profiling.

NOTE: Avoid creating datasets that are wider than 2500 columns. Performance can degrade significantly on very wide datasets.

TD-31327  Unable to save a dataset sourced from multi-line custom SQL on a dataset with parameters.
TD-31252  Assigning a target schema through the Column Browser does not refresh the page.
TD-31165  Job results are incorrect when a sample is collected and then the last transform step is undone.
TD-30979  Transformation job on a wide dataset fails on Spark 2.2 and earlier due to exceeding a Java JVM limit. For details, see https://issues.apache.org/jira/browse/SPARK-18016.
TD-30857  Matching file path patterns in a large directory can be very slow, especially when using multiple patterns in a single dataset with parameters.

NOTE: To increase matching speed, avoid wildcards in top-level directories and be as specific as possible with your wildcards and patterns.
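The advice above can be sketched with a small, hypothetical example. The paths and glob-style patterns below are illustrative only and are not the product's parameter syntax.

```python
import fnmatch

# Hypothetical file layout; names are illustrative only.
paths = [
    "/data/sales/2023/03/orders-001.csv",
    "/data/archive/2023/03/orders-001.csv",
]

# A wildcard in a top-level directory forces every subtree to be scanned:
broad = "/data/*/2023/*/orders-*.csv"

# Anchoring the top-level directory keeps the search narrow:
specific = "/data/sales/2023/*/orders-*.csv"

print([p for p in paths if fnmatch.fnmatch(p, broad)])      # matches both trees
print([p for p in paths if fnmatch.fnmatch(p, specific)])   # matches only sales/
```

The broad pattern forces the matcher to descend into every directory under /data, while the specific pattern prunes the search to a single subtree, which is the behavior the note recommends.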

TD-30854  When creating a new dataset from the Export Results window for a CSV dataset with Snappy compression, the resulting dataset is empty when loaded in the Transformer page.
TD-30820  Some string comparison functions process leading spaces differently when executed on the Photon running environment than on the Spark running environment.

TD-30717  No validation is performed for Redshift or SQL DW connections or permissions prior to job execution. Jobs are queued and then fail.
TD-27860  When the platform is restarted or reaches an HA failover state, any running jobs remain stuck in an In Progress state.

...