...
Ticket | Description
---|---
TD-36332 | Data grid can display wrong results if a sample is collected and the dataset is unioned.
TD-36192 | Canceling a step in the recipe panel can cause column menus to disappear in the data grid.
TD-36011 | Users can import modified exports or exports from a different version, which do not work.
TD-35916 | Cannot log out via SSO.
TD-35899 | A deployment user can see all deployments in the instance.
TD-35780 | Upgrade: Duplicate metadata in separate publications causes DB migration failure.
TD-35746 | /v4/importedDatasets GET method is failing.
TD-35644 | Extractpatterns with the "HTTP Query strings" option doesn't work.
TD-35504 | Cancel job throws a 405 status code error. Clicking Yes repeatedly reopens the Cancel Job dialog.
TD-35481 | After upgrade, recipe is malformed at the splitrows step.
TD-35177 | Login screen pops up repeatedly when access permission is denied for a connection.
TD-34822 | Case-sensitive variations in date range values are not matched when creating a dataset with parameters.
TD-33428 | Job execution fails on a recipe with a high limit in a split transformation due to a Java null pointer error during profiling.
TD-31327 | Unable to save a dataset sourced from multi-line custom SQL on a dataset with parameters.
TD-31252 | Assigning a target schema through the Column Browser does not refresh the page.
TD-31165 | Job results are incorrect when a sample is collected and then the last transform step is undone.
TD-30979 | Transformation job on a wide dataset fails on Spark 2.2 and earlier due to exceeding a Java JVM limit. For details, see https://issues.apache.org/jira/browse/SPARK-18016.
TD-30857 | Matching file path patterns in a large directory can be very slow, especially when using multiple patterns in a single dataset with parameters.
TD-30854 | When creating a new dataset from the Export Results window from a CSV dataset with Snappy compression, the resulting dataset is empty when loaded in the Transformer page.
TD-30820 | Some string comparison functions process leading spaces differently when executed on the Photon or Spark running environment.
TD-30717 | No validation is performed for Redshift or SQL DW connections or permissions prior to job execution. Jobs are queued and then fail.
TD-27860 | When the platform is restarted or an HA failover state is reached, any running jobs are stuck in an In Progress state indefinitely.
...