

For more information, see Configure for Spark.

Key Bug Fixes

TD-53062: After upgrade, imported recipe has UDF steps converted to comments.

On Azure Databricks, creating a stratified sample fails.


Cannot run Azure Databricks jobs on ADLS-Gen1 cluster in user mode.


UnknownHostException error when generating an Azure Databricks access token from the Secure Token Service.


Cannot import some Parquet files into the platform.


Import data page is taking too long to load.


Closing the connections search bar removes the search bar and loses the sort order.


On upgrade, Spark is incorrectly parsing files of type "UTF-8 Unicode (with BOM)."
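As background on this issue: a UTF-8 byte order mark (EF BB BF) at the start of a file can be misread as data when the decoder does not account for it. A minimal Python illustration (not the platform's parser):

```python
# A UTF-8 file written with a byte order mark (BOM) starts with EF BB BF.
data = "\ufeffname,city\nAlice,Berlin".encode("utf-8")

# Decoding with plain "utf-8" leaves the BOM attached to the first field...
naive = data.decode("utf-8")
print(repr(naive.split(",")[0]))   # '\ufeffname' -- BOM pollutes the header

# ...while "utf-8-sig" strips it, yielding the intended content.
clean = data.decode("utf-8-sig")
print(repr(clean.split(",")[0]))   # 'name'
```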


Import rules not working for remapping of WASB bucket name. For more information, see Define Import Mapping Rules.


Cannot import flow due to missing associated flownode error.


Server Save error when deleting a column.


Transformation engine unavailable due to prior crash.


After upgrade, you cannot edit recipes, or run jobs on recipes, in which the optional replaceOn parameter is not used in the Replace transformation.


Optional file cleanup generates confusing error logging when it fails.


When modifying file privileges, the platform makes assumptions about database usernames.


On upgrade, the migration framework for the authorization service is too brittle for use with Amazon RDS database installations.


When flows are imported into the Deployment Manager, additional characters are inserted into parameterized output paths, causing job failures.


PostgreSQL connections may experience out of memory errors due to incorrectly specified fetch size and vendor configuration.


Can't import a flow that contains a reference in a flow webhook task to a deleted output.


Generic Hadoop folder is missing in hadoop-deps folder.


After upgrade, you cannot publish as a single-file to WASB to replace an existing output destination.


After upgrade, users cannot load recipes due to Requested Data Not Found error when loading samples.


After upgrading Cloudera cluster to version 6.3.3, you cannot run jobs due to the following error:

class not found exception: java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/datasources/csv/CSVOptions

Please see "Cloudera support" above.


During upgrade, cross-migration fails for authorization service and its database with the following error:

Cross migration failed. Make sure the authorization DB is reset.

After upgrade, ad-hoc publish to Hive fails.


After upgrade, you cannot unzip downloaded log files.


After upgrade, cross-migration validation fails for "groupsPolicies."


Tripache Vulnerabilities - CVE-2020-1927

New Known Issues



Release 7.1

May 4, 2020

What's New


  • Support for installation on CentOS/RHEL 8. See System Requirements.


    NOTE: SSO using SAML is not supported on CentOS/RHEL 8. See Configure SSO for SAML.


    NOTE: Support for CentOS/RHEL 6 has been deprecated. Please upgrade to CentOS/RHEL 8.

  • Support for installation on CentOS/RHEL 7.7. See System Requirements.


  • Improved performance for Oracle, SQL Server, and DB2 connections. These performance improvements will be applied to other relational connections in future releases.


    NOTE: For more information on enabling this feature, please contact Professional Services.

  • Azure Databricks Tables:
    • Support for read/write on Delta tables.
    • Support for read/write on external tables.
    • Support for read from partitioned tables.
    • See Using Databricks Tables.


      NOTE: To enable these additional read/write capabilities through Databricks Tables, the underlying connection was changed to use a Simba driver. In your connection definition, any Connect String Options that relied on the old Hive driver may not work. For more information, see Configure for Azure Databricks.


  • Introducing plans. A plan is a sequence of tasks on one or more flows that can be scheduled.


    NOTE: In this release, the only type of task that is supported is Run Flow.



  • Improved performance when loading the Transformer page and when navigating between the Flow View and Transformer pages.
  • Join steps are now created in a larger window for more workspace. See Join Window.
  • New column selection UI simplifies choosing columns in your transformations. See Transform Builder.
  • Faster and improved method of surfacing transform suggestions based on machine learning.

Job Execution:


NOTE: Azure Databricks 5.3 and 5.4 are no longer supported. Please upgrade to Azure Databricks 5.5 LTS or 6.x. See End of Life and Deprecated Features.


  • All MODE functions return the lowest value in a set of values if there is a tie in the evaluation. See Changes to the Language.
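This tie-breaking rule can be sketched in plain Python (an illustration of the documented behavior, not the platform's implementation):

```python
from collections import Counter

def mode_lowest(values):
    """Return the most frequent value; on a tie, return the lowest.

    Mirrors the documented MODE behavior: among the values tied for
    the highest frequency, the smallest one is returned.
    """
    counts = Counter(values)
    top = max(counts.values())
    return min(v for v, c in counts.items() if c == top)

print(mode_lowest([3, 1, 3, 1, 2]))  # 1 and 3 tie at two occurrences each; the lower value, 1, wins
```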

Key Bug Fixes


By default, under SSO, manual logout and session expiration redirect to different pages. Manual logout directs you to SAML sign-out, and session expiry produces a session-expired page.

To redirect the user to a different URL on session expiry, an administrator can set the following parameter: webapp.session.redirectUriOnExpiry. This parameter applies to the following SSO environments:
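As a sketch of how this might look, the parameter could be set to the desired URL in the platform configuration (the exact location, whether the Admin Settings page or a configuration file, depends on your deployment; the URL shown is a placeholder):

```json
{
  "webapp.session.redirectUriOnExpiry": "https://sso.example.com/session-expired"
}
```

A platform restart is typically required for configuration changes to take effect.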

New Known Issues


You cannot update your AWS configuration for per-user or per-workspace mode through the UI.


Workaround: You can switch to using AWS system mode with a single, system-wide configuration, or you can use the APIs to make changes. See API Workflow - Manage AWS Configurations.


Cannot select and apply custom data types through column Type menu.


Workaround: You can change the type of the column as a recipe step. Use the Change column type transformation. From the New type drop-down, select Custom. Then, enter the name of the type for the Custom type value.

TD-47784: When creating custom datasets using SQL from Teradata sources, the ORDER BY clause in standard SQL does not work.

Uploaded files (CSV, XLS, PDF) that contain a space in the filename fail to be converted.


Workaround: Remove the space in the filename and upload again.