Release 6.8.1

February 7, 2020

This release enables some new features and makes some relational connections generally available. 

What's New

Install:

Import:

LDAP:

Cluster Clean:

Documentation:

Changes in System Behavior

Browser Support Policy:

NOTE: In a future release, support for the Microsoft Edge browser will be deprecated. Please switch to a supported version of Google Chrome or Mozilla Firefox. Support for Edge Chromium is expected in a future release. See Desktop Requirements.

CLI and v3 endpoints (Release 6.4):

NOTE: Do not attempt to connect to the Trifacta platform using any version of the CLI or the v3 endpoints. They are no longer supported and are unlikely to work.

In Release 6.4:

General availability:

Key Bug Fixes

Ticket      Description
TD-45492

Publishing to Databricks Tables fails on ADLS Gen1 in user mode.

New Known Issues

Ticket      Description
TD-47263

Importing an exported flow that references a Google Sheets or Excel source breaks the connection to the input source.

Workaround: If the importing user has access to the source, the user can re-import the dataset and then swap the source for the broken recipe.


Release 6.8

December 6, 2019

Welcome to Release 6.8 of the Trifacta platform. This release introduces several key features around operationalizing the platform across the enterprise. Enterprise stakeholders can now receive email notifications when recurring jobs have succeeded or failed, keeping data consumers outside of the platform up to date. This release also introduces a generalized webhook interface, which facilitates push notifications to applications such as Slack when jobs have completed. When jobs fail, users can download a much richer support bundle containing configuration files, script files, and a specified set of log files.
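As an illustration, a minimal receiver for such webhook notifications might look like the following sketch. The port and the payload field names (jobId, status) are assumptions for illustration only, not the platform's documented webhook payload.

    # Minimal sketch of an HTTP endpoint that accepts job webhook notifications.
    # The payload field names (jobId, status) are hypothetical placeholders.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class JobWebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            # Log the notification; a real handler might forward it to Slack.
            print("Job %s finished with status %s"
                  % (payload.get("jobId"), payload.get("status")))
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), JobWebhookHandler).serve_forever()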

Macros have been expanded and are now export- and import-ready across environments. In support of this feature, the Wrangle Exchange is now available through the Trifacta Community, where you can download macros created by others and import them for your own use. Like macros, you can now export and import flows across product editions and releases (Release 6.8 or later only).
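For example, an export-import round trip might be scripted against the v4 APIs along these lines. The /v4/flows package endpoints, host names, and token placeholders below are assumptions for this sketch; confirm the exact paths and schema in the API documentation.

    # Hedged sketch: export a flow package from one environment and import it
    # into another. Endpoints, hosts, tokens, and the flow id are placeholders.
    import requests

    SOURCE = "https://dev.example.com"
    TARGET = "https://prod.example.com"
    SRC_AUTH = {"Authorization": "Bearer <dev-token>"}
    TGT_AUTH = {"Authorization": "Bearer <prod-token>"}
    FLOW_ID = 42  # hypothetical flow identifier

    # Export the flow as a package (zip) from the source environment.
    exp = requests.get("%s/v4/flows/%d/package" % (SOURCE, FLOW_ID),
                       headers=SRC_AUTH)
    exp.raise_for_status()

    # Import the package into the target environment.
    imp = requests.post("%s/v4/flows/package" % TARGET, headers=TGT_AUTH,
                        files={"file": ("flow.zip", exp.content, "application/zip")})
    imp.raise_for_status()
    print("Imported flow:", imp.json())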

In the application, you can now use shortcut keys to navigate around the workspace and the Transformer page. And support for the Firefox browser has arrived. Read on for more goodness added with this release.

What's New

Install:

Workspace:

Browser:

Project Management:

Operationalization:

Supportability:

Connectivity:


Import:

Transformer Page:

Job execution:


Language:

API:

Changes in System Behavior

Browser Support Policy:

Install:

NOTE: In the next release of the Trifacta platform after Release 6.8, support for installation on CentOS/RHEL 6.x and Ubuntu 14.04 will be deprecated. You should upgrade the Trifacta node to a supported version of CentOS/RHEL 7.x or Ubuntu 16.04. Before performing the upgrade, please perform a full backup of the Trifacta platform and its databases. See Backup and Recovery.

Import/Export:

CLI and v3 endpoints (Release 6.4):

NOTE: Do not attempt to connect to the Trifacta platform using any version of the CLI or the v3 endpoints. They are no longer supported and are unlikely to work.

In Release 6.4:

Key Bug Fixes

Ticket      Description
TD-40348

When loading a recipe in an imported flow that references an imported Excel dataset, the Transformer page displays an Input validation failed: (Cannot read property 'filter' of undefined) error, and the screen is blank.

TD-42080

Cannot run a flow or deployment that contains more than 10 recipe jobs.


New Known Issues

Ticket      Description
TD-46123

Cannot modify the type of relational target for a publishing action.

Workaround: Create a new publishing action with the desired relational target. Remove the original one if necessary. See Run Job Page.

TD-45923

Publishing a compressed Snappy file to SFTP fails.

TD-45922

You cannot publish TDE format to SFTP destinations.
TD-45492

Publishing to Databricks Tables fails on ADLS Gen1 in user mode.

TD-45273

Artifact Storage Service fails to start on HDP 3.1.

Workaround: The Artifact Storage Service can be configured to reference the HDP 2.6 Hadoop bundle JAR instead. In platform configuration:

  1. Locate the following property:

    "artifact-storage-service.classpath"
  2. Replace this value:

    :%(topOfTree)s/%(hadoopBundleJar)s
  3. With the following:

    :%(topOfTree)s/conf/hadoop-site/:%(topOfTree)s/hadoop-deps/hdp-2.6/build/libs/hdp-2.6-bundle.jar
  4. Save changes and restart the platform.
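Assuming the property is maintained in the platform configuration file (for example, trifacta-conf.json), the resulting entry would end with the HDP 2.6 bundle path rather than the default Hadoop bundle JAR, along these lines (the elided prefix stands for the rest of the existing classpath value):

    "artifact-storage-service.classpath": "...:%(topOfTree)s/conf/hadoop-site/:%(topOfTree)s/hadoop-deps/hdp-2.6/build/libs/hdp-2.6-bundle.jar"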
TD-45122

API: Re-running a job using only the wrangleDataset identifier fails if writeSettings were specified for the original job, even when the original job succeeded.

Workaround: Use a full jobGroups job specification, including writeSettings, each time that you run the job, as shown in the sketch after this entry.

See API JobGroups Create v4.
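The following minimal sketch shows such a full specification posted to the v4 jobGroups endpoint. The host, token, recipe id, and output path are hypothetical placeholders; see API JobGroups Create v4 for the supported schema.

    # Hedged sketch: re-run a job with a full jobGroups specification that
    # restates writesettings instead of relying on the previous job's settings.
    # All ids, paths, and credentials below are placeholders.
    import requests

    spec = {
        "wrangledDataset": {"id": 21},  # hypothetical recipe id
        "overrides": {
            "writesettings": [{
                "path": "hdfs://example.com/output/result.csv",
                "format": "csv",
                "action": "create",
            }],
        },
    }
    resp = requests.post("https://example.com/v4/jobGroups", json=spec,
                         headers={"Authorization": "Bearer <token>"})
    resp.raise_for_status()
    print("Started jobGroup:", resp.json().get("id"))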

TD-44429

Cannot publish outputs to relational targets; the job fails with an Encountered error while processing stream error.

Workaround: This issue may be caused by the trifacta service account not having write and execute permissions on the /tmp directory of the Trifacta node. If so, you can do either of the following:

  1. Enable write and execute permissions for the account on /tmp.
  2. Create a new temporary directory and grant the service account write and execute permissions on it. Then, add the following to data-service.jvmOptions:

    -Dorg.xerial.snappy.tempdir=/new/directory/with/writeexecuteaccess
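Assuming the options are maintained in the platform configuration file (for example, trifacta-conf.json), the resulting entry might look like the following; the directory path is a placeholder, and any existing options should be preserved alongside it:

    "data-service.jvmOptions": [
        "-Dorg.xerial.snappy.tempdir=/new/directory/with/writeexecuteaccess"
    ]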
TD-44427

Cannot publish dataset containing duplicate rows to Teradata. Error message:

Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.10.00.14] [Error -2802] [SQLState 23000] Duplicate row error in abc_trifacta.tmp_218768523.

Workaround: This is a known limitation of Teradata. For more information on this limitation, see Enable Teradata Connections.