March 25, 2022
Databricks:
Support for Databricks 7.x and 8.x.
NOTE: Databricks 7.3 and Databricks 8.3 are recommended. |
Publishing:
Improvements to publishing to Snowflake. For more information, see Improvements to the Type System.
Nginx:
None.
Ticket | Description |
---|---|
TD-69201 | Vulnerability scan detected compromised versions of log4j on the Trifacta Hadoop dependency jars |
TD-69052 | Job fails using Spark when using parameterized files as input |
TD-69004 | Patch httpd to version 2.4.52 |
TD-68085 | |
TD-67953 | Remove log4j dependencies from Java projects |
TD-67747 | CVE-2021-44832: Apache Log4j2 vulnerable to RCE via JDBC Appender when attacker controls configuration |
TD-67677 | EMR spark job fails with error "org.apache.spark.sql.AnalysisException: Cannot resolve column name" if flow optimizations are enabled. |
TD-67640 | Intermittent failure to publish to Tableau in Fileconverter. |
TD-67572 | EMR spark job fails with error "org.apache.spark.sql.AnalysisException: Cannot resolve column name" |
TD-67558 | CVE-2021-45105: Log4j vulnerability (denial of service) |
TD-67531 | Glue jobs not working after upgrade to Release 8.2 |
TD-67455 | CVE-2021-45046: Log4j vulnerability |
TD-67410 | CVE-2021-23017: Nginx v.1.20.0 security vulnerability |
TD-67388 | Nest function failing |
TD-67372 | Patch/update Log4J (RCE 0-day exploit found in log4j) |
TD-67329 | Publish failing with "java.io.IOException: No FileSystem for scheme: sftp" |
TD-66779 | Output home directory is not picked up correctly for job runs on WASB/ADLS Gen2 |
TD-66160 | SSLHandshakeException when accessing Databricks table |
TD-66025 | Glue connection not working after upgrade to Release 8.2 |
TD-65696 | In Azure environments, changing the user output/upload directory persists only the path, not the container name or storage account. |
TD-65331 | Writing to ADLS fails during SSL handshake over TLS v1.1 |
TD-65286 | |
TD-65058 | Unable to upgrade due to migration failure |
TD-64627 | |
TD-64528 | Upgrade to Release 8.2 failed to load dictionaries |
TD-64281 | /change-password page fails to load. |
TD-64171 | Cannot import parameterized datasets that include files with zero and non-zero byte sizes together. |
TD-63981 | Start/stop scripts should not modify any config/database settings during startup. |
TD-63867 | Jobs are not triggered for parameterized datasets with zero-byte file sizes. |
TD-63493 | Unable to cancel a plan run |
TD-60881 | Incorrect path shown when using parameterized output path |
TD-59706 | No vertical scroll when there are too many connections on Import page |
TD-58576 | Cannot read property 'expandScriptLines' of undefined when flow node's activeSampleId is pointing to failed (null) sample. |
None.
August 13, 2021
Databricks:
None.
Ticket | Description |
---|---|
TD-62689 | Nginx returns Bad Request Status: 400 error, due to duplicate entries in |
None.
June 11, 2021
Preferences:
API:
Connectivity:
Databricks:
Support for Databricks 7.3, using Spark 3.0.1.
NOTE: Databricks 5.5 LTS is scheduled for end of life in July 2021. An upgrade to Databricks 7.3 is recommended. |
NOTE: In this release, Spark 3.0.1 is supported for use with Databricks 7.3 only. |
Plan metadata references:
Use metadata values from other tasks and from the plan itself in your HTTP task definitions.
The Jobs tab has been enhanced to display the latest and previous jobs that have been executed for the selected output.
For more information, see View for Outputs.
Sample Jobs Page:
Install:
Support for Nginx 1.20.0. See System Requirements.
NOTE: This required update applies only to customers who have modified their Java service classpaths to include the path listed below.
In deployments on a Hadoop edge node, the classpath values for Java-based services may have been modified to include the following:
/etc/hadoop/conf
As of this release, symlinks must be created within the installation's configuration directory (for example, /opt/trifacta/conf/hadoop-site) to replace the above path modifications.
NOTE: Before you perform the following update, please create a backup of the existing configuration files.
In the following example, a symlink is created in /opt/trifacta/conf/hadoop-site for each file in the /etc/hadoop/conf directory:
for file in `ls /etc/hadoop/conf`; do ln -sf /etc/hadoop/conf/$file /opt/trifacta/conf/hadoop-site/$file; done
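For reference, a more defensive variant of the same update backs up the existing directory first and verifies the result; the paths below match the example above and should be adjusted to your installation:

```
# Sketch only: back up the existing hadoop-site directory, then symlink every
# file from /etc/hadoop/conf into it. Paths are examples; adjust as needed.
SRC=/etc/hadoop/conf
DEST=/opt/trifacta/conf/hadoop-site

cp -a "$DEST" "${DEST}.bak.$(date +%Y%m%d%H%M%S)"   # backup, per the note above

for file in "$SRC"/*; do
  ln -sf "$file" "$DEST/$(basename "$file")"
done

ls -l "$DEST"   # verify that the symlinks resolve to /etc/hadoop/conf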
Running Environment:
Cloudera 5.x, including Cloudera 5.16, is no longer supported. Please upgrade to a supported version of Cloudera 6.x.
Catalog integrations end of life:
The following catalog integrations are no longer available in the platform:
For more information, see End of Life and Deprecated Features.
API:
The following API endpoints are scheduled for deprecation in a future release:
NOTE: Please avoid using the following endpoints. |
/v4/connections/vendors
/v4/connections/credentialTypes
/v4/connections/:id/publish/info
/v4/connections/:id/import/info
These endpoints have little value for public use.
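If you need to audit existing automation for calls to these endpoints, they are plain authenticated GET requests; the host and token below are placeholders, and the Bearer-token style shown is an assumption based on typical REST usage rather than a statement about this product's API:

```
# Illustrative only: the call pattern to look for and retire in your scripts.
# Replace example.com and $API_TOKEN with your own values.
for endpoint in /v4/connections/vendors /v4/connections/credentialTypes; do
  curl -s -H "Authorization: Bearer $API_TOKEN" "https://example.com${endpoint}"
done
```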
Ticket | Description |
---|---|
TD-59854 | Datetime column from a Parquet file is inferred to the wrong data type on import. |
TD-59658 | IAM roles passed through SAML do not update after hotfix upgrade |
TD-59633 | With the session tag feature enabled, requests fail with a "The security token included in the request is invalid" error |
TD-59331 | When the include quotes option is disabled on an output, Databricks still places quotes around empty values. |
TD-59128 | BOM characters at the beginning of a file cause multiple headers to appear in the Transformer page. |
TD-58932 | Cannot read file paths with colons from EMR Spark jobs |
TD-58694 | Very large number of files generated during Spark job execution |
TD-58523 | Cannot import dataset with filename in Korean alphabet from HDFS. |
Ticket | Description |
---|---|
TD-60701 | Most non-ASCII characters incorrectly represented in visual profile downloaded in PDF format. |
February 26, 2021
In-app messaging: Be sure to check out the new in-app messaging feature, which allows us to share new features and relevant content with you.
Install:
Support for PostgreSQL 12.x on all supported operating systems.
NOTE: Beginning in this release, the latest stable release of PostgreSQL 12 can be installed with the product.
NOTE: Support for PostgreSQL 9.6 is deprecated for customer-managed Hadoop-based deployments and AWS deployments. PostgreSQL 9.6 is supported only for Azure deployments. When Azure supports PostgreSQL 12 or later, support for PostgreSQL 9.6 will be deprecated in a subsequent release.
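As a rough sketch of what installing PostgreSQL 12 can involve on a RHEL/CentOS 7 host using the community PGDG repository (package names and the repository URL are assumptions; follow the install documentation for your operating system):

```
# Example only: install PostgreSQL 12 from the PGDG repository on CentOS/RHEL 7.
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install -y postgresql12-server postgresql12
sudo /usr/pgsql-12/bin/postgresql-12-setup initdb
sudo systemctl enable --now postgresql-12
```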
Security:
Databases:
New databases:
Connectivity:
Authorization:
Sharing:
Define permissions on individual objects when they are shared.
NOTE: Fine-grained sharing permissions apply to flows and connections only. |
For more information, see Changes to User Management.
Running environment:
Publishing:
You can replace an existing macro definition with a macro that you have exported to your local desktop.
NOTE: Before you replace the existing macro, you must export a macro to your local desktop. For more information, see Export Macro. |
For more information, see Macros Page.
Sample Jobs Page:
For more information, see Sample Jobs Page.
Specify column headers during import:
You can specify the column headers for your dataset during import. For more information, see Import Data Page.
Services:
NOTE: CDH 6.1 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix. |
NOTE: HDP 2.6 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix. |
Support for custom data types based on dictionary files to be deprecated:
NOTE: The ability to upload dictionary files and use their contents to define custom data types is scheduled for deprecation in a future release. This feature is limited and inflexible. Until an improved feature can be released, please consider using workarounds. For more information, see Validate Your Data. You can create custom data types using regular expressions. For more information, see Create Custom Data Types. |
Strong consistency management now provided by AWS S3:
Prior to this release, S3 sometimes did not accurately report the files that had been written to it, which resulted in consistency issues between the files that were written to disk and the files that were reported back to the platform.
As of this release, AWS has improved S3 with strong consistency checking, which removes the need for the product to maintain a manifest file containing the list of files that have been written to S3 during job execution.
NOTE: As of this release, the S3 job manifest file is no longer maintained. All configuration related to this feature has been removed from the product. No additional configuration is needed. |
For more information, see https://aws.amazon.com/s3/consistency/.
For more information on integration with S3, see Enable S3 Access.
Installation of database client is now required:
Before you install or upgrade the database or perform any required database cross-migrations, you must install the appropriate database client first.
NOTE: Use of the database client provided with each supported database distribution is now a required part of any installation or upgrade of the platform.
For more information:
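As a minimal sketch (assuming a PostgreSQL-backed deployment on CentOS/RHEL with the PGDG repository enabled; package names may differ on your distribution), verify that the client tools are present before starting any migration:

```
# Example only: install and verify the PostgreSQL client tools before migrating.
sudo yum install -y postgresql12        # client package; adjust to your database version
psql --version                          # confirm the client matches the server's major version
pg_dump --version                       # dump/restore tools are typically used for cross-migrations
```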
Job logs collected asynchronously for Databricks jobs:
In prior releases, the platform reported that a job failed only after the job logs had been collected from the Databricks cluster. This log collection process could take a while to complete, and the job was reported as in progress when it had already failed.
Beginning in this release, collection of Databricks job logs for failed jobs happens asynchronously. Jobs are now reported as failed as soon as the failure is known, and log collection happens in the background afterward.
Catalog integrations now deprecated:
Integrations with Alation and Waterline services are now deprecated. For more information, see End of Life and Deprecated Features.
Ticket | Description |
---|---|
TD-56170 | The Test Connection button for some relational connection types does not perform a test authentication of user credentials. |
TD-54440 | Header sizes at intermediate nodes for JDBC queries cannot be larger than 16K. Previously, the column names for JDBC data sources were passed as part of a header in a GET request. For very wide datasets, these GET requests often exceeded 16K in size, which represented a security risk. The solution is to turn these GET requests into ingestion jobs. |
Ticket | Description |
---|---|
TD-58818 | Cannot run jobs on some builds of HDP 2.6.5 and later. There is a known incompatibility between HDP 2.6.5.307-2 and later and the Hadoop bundle JARs that are shipped with the platform. |
TD-58523 | Cannot import dataset with filename in Korean alphabet from HDFS. |
TD-55299 | Imported datasets with encodings other than UTF-8 and line delimiters other than |
TD-51516 | Input data containing BOM (byte order mark) characters may cause Spark or Photon job failures. |
January 26, 2021
APIs:
Connectivity:
Support for using OAuth2 authentication for Salesforce connections.
NOTE: Use of OAuth2 authentication requires additional configuration. For more information, see OAuth 2.0 for Salesforce. |
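For context, a Salesforce OAuth 2.0 token exchange generally looks like the following; the connection configuration handles token management for you, and the credential values shown are placeholders:

```
# Illustrative only: a generic Salesforce OAuth 2.0 refresh-token exchange.
curl -s -X POST "https://login.salesforce.com/services/oauth2/token" \
  -d "grant_type=refresh_token" \
  -d "client_id=$SF_CLIENT_ID" \
  -d "client_secret=$SF_CLIENT_SECRET" \
  -d "refresh_token=$SF_REFRESH_TOKEN"
```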
Recipe development:
Update Macros:
Job execution:
None.
Ticket | Description |
---|---|
TD-57354 | Cannot import data from Azure Databricks. This issue is caused by an incompatibility with TLS v1.3, which was backported to Java 8. |
TD-57180 | AWS jobs run on Photon that publish to HYPER format fail during file conversion or writing. |
Ticket | Description |
---|---|
TD-56170 | The Test Connection button for some relational connection types does not perform a test authentication of user credentials. |
December 21, 2020
Tip: Check out the new in-app tours, which walk you through the steps of wrangling your datasets into clean, actionable data. |
Import:
For more information, see Export Plan.
For more information, see Import Plan.
Authentication:
Connectivity:
Improved Salesforce connection type.
For more information, see Create Salesforce Connections.
API:
Rebuild custom UDF JARs for Databricks clusters
Previously, UDF files were checked for consistency based upon the creation time of the JAR file. However, if the JAR file was passed between Databricks nodes in a high availability environment or between services in the platform, this timestamp could change, which could cause job failures due to checks on the created-at timestamps.
Beginning in this release, the platform now inserts a build-at timestamp into the custom UDF manifest file when the JAR is built. This value is fixed, regardless of the location of the copy of the JAR file.
NOTE: Custom UDF JARs that were created using earlier releases of the platform and deployed to a Databricks cluster must be rebuilt and redeployed as of this release. For more information on troubleshooting the error conditions, see Java UDFs. |
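If you maintain custom UDFs, the rebuild-and-redeploy step is typically just a fresh build of the JAR followed by copying it back to the location your cluster configuration points at; the build tool, JAR name, and DBFS destination below are assumptions, not the product's prescribed procedure:

```
# Example only: rebuild the custom UDF JAR and copy it back to DBFS.
# Build tool, JAR name, and destination path are placeholders.
mvn -q clean package
databricks fs cp --overwrite target/custom-udfs.jar dbfs:/trifacta/libs/custom-udfs.jar
```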
In prior releases, integration with EMR required the deployment of a custom credential provider JAR file, provided by the customer as part of the initial bootstrap of the EMR cluster. As of this release, this JAR file is no longer required; it is provided by the platform directly.
NOTE: If your deployment of the |
For more information on integrating with EMR, see Configure for EMR.
The version of nodeJS has been upgraded to nodeJS 14.15.4 LTS. For more information, see System Requirements.
When a dataset is loaded, the platform now reads in more data before the type inference system and row-splitting transformations analyze the data to break it into rows and columns. This larger data size should result in better type inference in the system.
NOTE: Types and row splits on pre-existing datasets may be affected by this change. |
For more information, see Improvements to the Type System.
Ticket | Description |
---|---|
TD-54742 | Access to S3 is disabled after upgrade. |
TD-53527 | When importing a dataset via API that is sourced from a BZIP file stored on S3, the columns may not be properly split when the platform is permitted to detect the structure. |
Ticket | Description |
---|---|
TD-57180 | AWS jobs run on Photon that publish to HYPER format fail during file conversion or writing. |
TD-56830 | Receive |
November 16, 2020
Plan View:
Transform Builder:
Manage Users section has been deprecated:
In previous releases, user management functions were available through the Manage Users section of the Admin Settings page. These functions have been migrated to the Workspace Settings page, where all of the previous functions are now available. The Manage Users section has been deprecated.
Better license management:
In prior releases, the platform locked out all users if the number of active users exceeded the number permitted by the license. This situation could occur if users were being added via API, for example.
Beginning in this release, the platform does not block access when the number of licensed users is exceeded.
NOTE: If you see the notification banner about license key violations, please adjust your users until the banner is removed. If you need to adjust the number of users associated with your license key, please contact Support.
For more information, see License Key.
Photon jobs now use ingestion for relational sources:
When a job is run on Photon, any relational data sources are ingested into the backend datastore as a preliminary step during sampling or transformation execution. This change aligns Photon job execution with future improvements to the overall job execution framework. No additional configuration is required.
Tip: Jobs that are executed on the |
For more information on ingestion, see Configure JDBC Ingestion.
Job results page changes:
Ticket | Description |
---|---|
TD-55125 | Cannot copy flow. However, export and import of the flow enables copying. |
TD-53475 | Missing associated artifact error when importing a flow. |
None.
October 19, 2020
Plans:
Create HTTP tasks for your plans, which can be configured to issue a request to an API endpoint over HTTP.
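For a sense of what such a task sends, here is a generic example of the kind of request an HTTP task could be configured to issue; the URL, method, headers, and body are placeholders that you define in the task:

```
# Illustrative only: the shape of a request an HTTP task might issue.
curl -s -X POST "https://example.com/api/notify" \
  -H "Content-Type: application/json" \
  -d '{"plan": "nightly-refresh", "status": "succeeded"}'
```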
Publishing:
IAM support for Redshift connections.
NOTE: To enable use of an existing IAM role for Redshift, additional permissions must be added. For more information, see Required AWS Account Permissions. |
For more information, see Create Redshift Connections.
JDBC connection pooling disabled:
NOTE: The ability to create connection pools for JDBC-based connections has been disabled. Although it can be re-enabled if necessary, it is likely to be removed in a future release. For more information, see Changes to Configuration. |
TDE format has been deprecated:
Tableau Server has deprecated support for the TDE file format. As of this release, all outputs and publications to Tableau Server must be generated using HYPER, the replacement format for TDE.
For more information, see Tableau Hyper Data Type Conversions.
Enhanced Flow and Flow View menu options:
The context menu options for Flow View and Flow have been renamed and reorganized for a better user experience.
None.
Ticket | Description |
---|---|
TD-54030 | When creating custom datasets from Snowflake, columns containing time zone data are rendered as null values in visual profiles, and publishing back to Snowflake fails. |
September 21, 2020
Flow View:
As part of the collaborative suggestions enhancement, support for the Parameter History panel is deprecated from the software. For more information on the collaborative suggestions feature, see Overview of Predictive Transformation.
In Release 7.6, an improved version of Flow View was released. At the time of release, users could switch back to using the classic version.
Beginning in this release, the classic version of Flow View is no longer available.
Tip: The objects in your flows that were created in classic Flow View may be misaligned in the new version of Flow View. You can use auto-arrange to re-align your flow objects. |
For more information, see Flow View Page.
Ticket | Description |
---|---|
TD-53318 | Cannot publish results to relational targets when flow name or output filename or table name contains a hyphen (e.g. my - filename.csv). |
None.