Comment: Published by Scroll Versions from space DEV and version r0711

...

For more information, see Configure for Spark.

Key Bug Fixes

Ticket | Description
TD-53062

After upgrade, imported recipe has UDF steps converted to comments.
TD-52738

On Azure Databricks, creating a stratified sample fails.

TD-52686

Cannot run Azure Databricks jobs on ADLS-Gen1 cluster in user mode.

TD-52614

UnknownHostException error when generating an Azure Databricks access token from the Secure Token Service.

TD-51903

Cannot import some Parquet files into the platform.

TD-51681

Import Data page takes too long to load.

TD-51537

Closing the search bar on the Connections page also discards the current sort order.

TD-51306

On upgrade, Spark is incorrectly parsing files of type "UTF-8 Unicode (with BOM)."

TD-51218

Import rules not working for remapping of WASB bucket name. For more information, see Define Import Mapping Rules.

TD-51166

Cannot import flow due to missing associated flownode error.

TD-50945

Server Save error when deleting a column.

TD-50906

Transformation engine unavailable due to a prior crash.

TD-50791

After upgrade, you cannot edit recipes, or run jobs on recipes, in which the optional replaceOn parameter is not used in the Replace transformation.

TD-50703

Optional file cleanup generates confusing error logging when it fails.

TD-50642

When modifying file privileges, the platform makes assumptions about database usernames.

TD-50530

On upgrade, the migration framework for the authorization service is too brittle for use with Amazon RDS database installations.

TD-50525

When flows are imported into the Deployment Manager, additional characters are inserted into parameterized output paths, causing job failures.

TD-50522

PostgreSQL connections may experience out of memory errors due to incorrectly specified fetch size and vendor configuration.

TD-50516

Can't import a flow that contains a reference in a flow webhook task to a deleted output.

TD-50508

Generic Hadoop folder is missing in hadoop-deps folder.

TD-50496

After upgrade, you cannot publish as a single-file to WASB to replace an existing output destination.

TD-50495

After upgrade, users cannot load recipes due to Requested Data Not Found error when loading samples.

TD-50466

After upgrading Cloudera cluster to version 6.3.3, you cannot run jobs due to the following error:

Code Block
class not found exception: java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/datasources/csv/CSVOptions

Please see "Cloudera support" above.

TD-50446

During upgrade, cross-migration fails for authorization service and its database with the following error:

Code Block
Cross migration failed. Make sure the authorization DB is reset.

TD-50164

After upgrade, ad-hoc publish to Hive fails.

TD-49991

After upgrade, you cannot unzip downloaded log files.

TD-49973

After upgrade, cross-migration validation fails for "groupsPolicies."

TD-49692

Tripache vulnerabilities: CVE-2020-1927.

New Known Issues

...

None.

Release 7.1

May 4, 2020

What's New

...

  • Support for installation on CentOS/RHEL 8. See System Requirements.

    Info

    NOTE: SSO using SAML is not supported on CentOS/RHEL 8. See Configure SSO for SAML.

    Info

    NOTE: Support for CentOS/RHEL 6 has been deprecated. Please upgrade to CentOS/RHEL 8.

  • Support for installation on CentOS/RHEL 7.7. See System Requirements.

...

  • Improved performance for Oracle, SQL Server, and DB2 connections. These performance improvements will be applied to other relational connections in future releases.

    Info

    NOTE: For more information on enabling this feature, please contact Professional Services.

  • Azure Databricks Tables:
    • Support for read/write on Delta tables.
    • Support for read/write on external tables.
    • Support for read from partitioned tables.
    • See Using Databricks Tables.

      Info

      NOTE: To enable these additional read/write capabilities through Databricks Tables, the underlying connection was changed to use a Simba driver. In your connection definition, any Connect String Options that relied on the old Hive driver may not work. For more information, see Configure for Azure Databricks.

...

  • Introducing plans. A plan is a sequence of tasks on one or more flows that can be scheduled.

    Info

    NOTE: In this release, the only type of task that is supported is Run Flow.

     

...

  • Improved performance when loading the Transformer page and when navigating between the Flow View and Transformer pages.
  • Join steps are now created in a larger window for more workspace. See Join Window.
  • New column selection UI simplifies choosing columns in your transformations. See Transform Builder.
  • Faster and improved method of surfacing transform suggestions based on machine learning.

Job Execution:

Info

NOTE: Azure Databricks 5.3 and 5.4 are no longer supported. Please upgrade to Azure Databricks 5.5 LTS or 6.x. See End of Life and Deprecated Features.

...

  • All MODE functions return the lowest value in a set of values if there is a tie in the evaluation. See Changes to the Language.
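
To illustrate the tie-breaking rule described above, here is a minimal sketch in Python (this is not the platform's implementation; the function name mode_lowest is for illustration only):

```python
from collections import Counter

def mode_lowest(values):
    """Return the most frequent value; on a tie in frequency,
    return the lowest of the tied values."""
    counts = Counter(values)
    max_count = max(counts.values())
    # Among the values that share the highest frequency, pick the lowest.
    return min(v for v, c in counts.items() if c == max_count)

# 5 and 2 each appear twice; the lower value, 2, wins the tie.
print(mode_lowest([5, 2, 5, 2, 9]))  # → 2
```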

Key Bug Fixes

Ticket | Description
TD-48245

By default, under SSO, manual logout and session-expiration logout redirect to different pages. Manual logout directs you to the SAML sign-out page, and session expiry produces a session-expired page.

To redirect the user to a different URL on session expiry, an administrator can set the following parameter: webapp.session.redirectUriOnExpiry. This parameter applies to the following SSO environments:
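
As a sketch of how such a parameter is typically set (the JSON layout and example URL below are assumptions; only the parameter name webapp.session.redirectUriOnExpiry comes from this document), the dotted name maps to nested keys in the platform configuration file:

```json
{
  "webapp": {
    "session": {
      "redirectUriOnExpiry": "https://sso.example.com/session-expired"
    }
  }
}
```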


New Known Issues

Ticket | Description
TD-52221

You cannot update your AWS configuration for per-user or per-workspace mode via UI.

Tip

Workaround: You can switch to AWS system mode, which uses a single, system-wide configuration, or you can use the APIs to make changes. See API Workflow - Manage AWS Configurations.

TD-49559

Cannot select and apply custom data types through the column Type menu.

Tip

Workaround: You can change the type of the column as a recipe step. Use the Change column type transformation. From the New type drop-down, select Custom. Then, enter the name of the type for the Custom type value.

TD-47784

When creating custom datasets using SQL from Teradata sources, the ORDER BY clause in standard SQL does not work.
TD-47473

Uploaded files (CSV, XLS, PDF) that contain a space in the filename fail to be converted.

Tip

Workaround: Remove the space in the filename and upload again.