Release 8.2.2

March 25, 2022

What's New


Changes in System Behavior


Improvements to publishing to Snowflake. For more information, see Improvements to the Type System.




Key Bug Fixes




Vulnerability scan detected compromised versions of log4j on the Trifacta Hadoop dependency jars


Job fails using Spark when using parameterized files as input


Patch httpd to version 2.4.52


unavailable due to update lock on plantasksnapshotruns


Remove log4j dependencies from Java projects


CVE-2021-44832: Apache Log4j2 vulnerable to RCE via JDBC Appender when attacker controls configuration


EMR spark job fails with error "org.apache.spark.sql.AnalysisException: Cannot resolve column name" if flow optimizations are enabled.


Intermittent failure to publish to Tableau in Fileconverter.


EMR spark job fails with error "org.apache.spark.sql.AnalysisException: Cannot resolve column name"


CVE-2021-45105: Log4j vulnerability (denial of service)


Glue jobs not working after upgrade to Release 8.2


CVE-2021-45046: Log4j vulnerability


CVE-2021-23017: Nginx v.1.20.0 security vulnerability


Nest function failing


Patch/update Log4J (RCE 0-day exploit found in log4j)


Publish failing with "No FileSystem for scheme: sftp"


Output home directory is not picked correctly for job runs in wasb/adls-gen2


SSLHandshakeException when accessing Databricks table


Glue connection not working after upgrade to Release 8.2


In Azure environment, changing the user output/upload directory only persists the path and not the container name/account storage.


Writing to ADLS failing in SSL Handshake to TLSv1.1


Jobs fail at the Transform stage with an Optimizer Service exception


Unable to upgrade due to migration failure


failing due to concurrent DB transaction


Upgrade to Release 8.2 failed to load dictionaries


/change-password page fails to load.


Cannot import parameterized datasets that include files with zero and non-zero byte sizes together.


Start/stop scripts should not modify any config/database settings during startup.


Jobs are not triggering for parameterized datasets with zero-byte file sizes.


Unable to cancel a plan run


Incorrect path shown when using parameterized output path


No vertical scroll when there are too many connections on Import page


Cannot read property 'expandScriptLines' of undefined when a flow node's activeSampleId points to a failed (null) sample.

New Known Issues


Release 8.2.1

August 13, 2021

What's New



Changes in System Behavior


Key Bug Fixes


Nginx returns a Bad Request (Status: 400) error due to duplicate entries in /etc/nginx/conf.d/trifacta.conf for:

proxy_set_header Host $host;

Tip: The workaround is to delete the second entry in the file manually.
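The duplicate can be confirmed and removed with standard tools. The following is a minimal sketch, demonstrated on a temporary file with assumed contents; point CONF at /etc/nginx/conf.d/trifacta.conf to apply the same fix to a real installation.

```shell
# Minimal sketch: keep the first "proxy_set_header Host $host;" line and
# drop any repeats. Demonstrated on a temporary file with assumed contents;
# substitute /etc/nginx/conf.d/trifacta.conf to apply it to a real install.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
server_name example.com;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
EOF

# Confirm the duplicate exists (prints 2 for the file above).
grep -c 'proxy_set_header Host' "$CONF"

# Keep the first matching line, drop repeats.
awk '!index($0, "proxy_set_header Host $host;") || !seen++' "$CONF" > "$CONF.dedup"
grep -c 'proxy_set_header Host' "$CONF.dedup"   # prints 1
```

After replacing the real file, validate with nginx -t before reloading nginx.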

New Known Issues


Release 8.2

June 11, 2021

What's New





Support for Databricks 7.3, using Spark 3.0.1.

NOTE: Databricks 5.5 LTS is scheduled for end of life in July 2021. An upgrade to Databricks 7.3 is recommended.

NOTE: In this release, Spark 3.0.1 is supported for use with Databricks 7.3 only.

Plan metadata references:

Use metadata values from other tasks and from the plan itself in your HTTP task definitions.

Improved accessibility of job results:

The Jobs tabs have been enhanced to display the latest and previous jobs that have been executed for the selected output.

For more information, see View for Outputs.

Sample Jobs Page:

You can monitor the status of all sample jobs that you have generated. Project administrators can access all sample jobs in the workspace. For more information, see Sample Jobs Page.


Support for Nginx 1.20.0. See System Requirements.

Changes in System Behavior

Java service classpath changes:

NOTE: This required update applies only to customers who have modified their Java service classpaths to include /etc/hadoop/conf.

In deployments on a Hadoop edge node, the classpath values for Java-based services may have been modified to include the following:


As of this release, symlinks must be created to locations within the platform conf directory to replace the above path modifications.

NOTE: Before you begin the following update, please create a backup of /etc/hadoop/conf first.

In the following example, a symlink is created in /opt/trifacta/conf/hadoop-site for each file in the /etc/hadoop/conf directory.

for file in /etc/hadoop/conf/*; do ln -sf "$file" /opt/trifacta/conf/hadoop-site/"$(basename "$file")"; done
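After creating the links, it is worth confirming that every symlink resolves. The following sketch demonstrates the same loop plus a verification pass, run on temporary directories (the file name core-site.xml is just an assumed example); substitute the real /etc/hadoop/conf and /opt/trifacta/conf/hadoop-site paths to check an actual installation.

```shell
# Sketch: create symlinks as in the example above, then verify that each
# link resolves. Temporary directories stand in for the real paths.
SRC="$(mktemp -d)"   # stands in for /etc/hadoop/conf
DST="$(mktemp -d)"   # stands in for /opt/trifacta/conf/hadoop-site

echo "<configuration/>" > "$SRC/core-site.xml"   # assumed sample file

# Same pattern as the documented one-liner.
for file in "$SRC"/*; do ln -sf "$file" "$DST/$(basename "$file")"; done

# Verification pass: -e follows the symlink, so a broken link fails the test.
broken=0
for link in "$DST"/*; do
  [ -e "$link" ] || { echo "broken symlink: $link"; broken=1; }
done
[ "$broken" -eq 0 ] && echo "all symlinks resolve"
```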

Running Environment:

Cloudera 5.x, including Cloudera 5.16, is no longer supported. Please upgrade to a supported version of Cloudera 6.x.

Catalog integrations end of life:

The following catalog integrations are no longer available in the platform:

For more information, see End of Life and Deprecated Features.


The following API endpoints are scheduled for deprecation in a future release:

NOTE: Please avoid using the following endpoints.


These endpoints have little value for public use.

Key Fixes

TD-59854: Datetime column from Parquet file inferred to the wrong data type on import.
TD-59658: IAM roles passed through SAML do not update after hotfix upgrade.
TD-59633: Enabled session tag feature but running into "The security token included in the request is invalid" error.
TD-59331: When the include quotes option is disabled on an output, Databricks still places quotes around empty values.
TD-59128: BOM characters at the beginning of a file cause multiple headers to appear in the Transformer page.
TD-58932: Cannot read file paths with colons from EMR Spark jobs.
TD-58694: Very large number of files generated during Spark job execution.
TD-58523: Cannot import dataset with filename in Korean alphabet from HDFS.

New Known Issues

TD-60701: Most non-ASCII characters incorrectly represented in visual profile downloaded in PDF format.

Release 8.1

February 26, 2021

What's New

In-app messaging: Be sure to check out the new in-app messaging feature, which allows us to share new features and relevant content with users in your workspace. The user messaging feature can be disabled by workspace administrators if necessary. See Workspace Settings Page.








Running environment:


Macro updates:

You can replace an existing macro definition with a macro that you have exported to your local desktop.

NOTE: Before you replace the existing macro, you must export a macro to your local desktop. For more information, see Export Macro.

For more information, see Macros Page.

Sample Jobs Page:

You can monitor the status of all sample jobs that you have generated. Project administrators can access all sample jobs in the workspace. For more information, see Sample Jobs Page.

Specify column headers during import

You can specify the column headers for your dataset during import. For more information, see Import Data Page.


Changes in System Behavior

NOTE: CDH 6.1 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix.

NOTE: HDP 2.6 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix.

Support for custom data types based on dictionary files to be deprecated:

NOTE: The ability to upload dictionary files and use their contents to define custom data types is scheduled for deprecation in a future release. This feature is limited and inflexible. Until an improved feature can be released, please consider using workarounds. For more information, see Validate Your Data.

You can create custom data types using regular expressions. For more information, see Create Custom Data Types.

Strong consistency management now provided by AWS S3:

Prior to this release, S3 sometimes did not accurately report the files that had been written to it, which resulted in consistency issues between the files that were written to disk and the files that were reported back to the platform.

As of this release, AWS has improved S3 with strong consistency checking, which removes the need for the product to maintain a manifest file containing the list of files that have been written to S3 during job execution.

NOTE: As of this release, the S3 job manifest file is no longer maintained. All configuration related to this feature has been removed from the product. No additional configuration is needed.


For more information on integration with S3, see Enable S3 Access.

Installation of database client is now required:

Before you install or upgrade the database or perform any required database cross-migrations, you must install the appropriate database client first.

NOTE: Use of the database client provided with each supported database distribution is now a required part of any installation or upgrade of the platform.

For more information: 

Job logs collected asynchronously for Databricks jobs:

In prior releases, the platform reported that a job failed only after the job logs had been collected from the Databricks cluster. This log collection process could take a while to complete, and the job was reported as in progress when it had already failed.

Beginning in this release, collection of Databricks job logs for failed jobs happens asynchronously. Failed jobs are now reported as soon as the failure is known, and log collection happens in the background afterward.

Catalog integrations now deprecated:

Integrations with the Alation and Waterline services are now deprecated. For more information, see End of Life and Deprecated Features.

Key Bug Fixes

TD-56170: The Test Connection button for some relational connection types does not perform a test authentication of user credentials.

Header sizes at intermediate nodes for JDBC queries cannot be larger than 16K.

Previously, the column names for JDBC data sources were passed as part of a header in a GET request. For very wide datasets, these GET requests often exceeded 16K in size, which represented a security risk.

The solution is to turn these GET requests into ingestion jobs.

NOTE: To mitigate this issue, JDBC ingestion and JDBC long loading must be enabled in your environment. For more information, see Configure JDBC Ingestion.

New Known Issues


Cannot run jobs on some builds of HDP 2.6.5 and later. There is a known incompatibility between these HDP builds and the Hadoop bundle JARs that are shipped with the platform.

Solution: Use an earlier compatible version. For more information, see Configure for Hortonworks.


Cannot import dataset with filename in Korean alphabet from HDFS.

Workaround: You can upload files with Korean characters from your desktop. You can also append a 1 to the end of the filename on HDFS, after which the file can be imported.


Imported datasets with encodings other than UTF-8 and line delimiters other than \n may generate empty outputs on the Spark or Photon running environments.


Input data containing BOM (byte order mark) characters may cause the Spark or Photon running environments to read data improperly and/or generate invalid results.

Release 8.0

January 26, 2021

What's New




Recipe development:

Update Macros:

Job execution:

Changes in System Behavior


Key Bug Fixes


Cannot import data from Azure Databricks. This issue is caused by an incompatibility with TLS v1.3, which was backported to Java 8.


AWS jobs run on Photon to publish to HYPER format fail during file conversion or writing.

New Known Issues


The Test Connection button for some relational connection types does not perform a test authentication of user credentials.

Workaround: Append the following to your Connect String Options:


This option forces the connection to validate user credentials as part of the connection. There may be a performance penalty when this option is used.

Release 7.10

December 21, 2020

What's New

Tip: Check out the new in-app tours, which walk you through the steps of wrangling your datasets into clean, actionable data.


Plan View:

For more information, see Export Plan.

For more information, see Import Plan.





Changes in System Behavior

Rebuild custom UDF JARs for Databricks clusters

Previously, UDF files were checked for consistency based upon the creation time of the JAR file. However, if the JAR file was passed between Databricks nodes in a high availability environment or between services in the platform, this timestamp could change, which could cause job failures due to checks on the created-at timestamps.

Beginning in this release, the platform inserts a built-at timestamp into the custom UDF manifest file when the JAR is built. This value is fixed, regardless of the location of the copy of the JAR file.

NOTE: Custom UDF JARs that were created using earlier releases of the platform and deployed to a Databricks cluster must be rebuilt and redeployed as of this release. For more information on troubleshooting the error conditions, see Java UDFs.

Custom credential provider JAR no longer required for EMR access

In prior releases, integration with EMR required the deployment of a custom credential provider JAR file, provided by the customer, as part of the initial bootstrap of the EMR cluster. As of this release, this JAR file is no longer required. Instead, it is provided by the platform directly.

NOTE: If your deployment of the platform integrates with AWS Glue, you must still provide and deploy a custom credentials JAR file. For more information, see Enable AWS Glue Access.

For more information on integrating with EMR, see Configure for EMR.

Upgrade nodeJS

The version of nodeJS on the platform has been upgraded to nodeJS 14.15.4 LTS. For more information, see System Requirements.

Data type and row split inference utilize more data

When a dataset is loaded, the platform now reads in more data before the type inference system and row splitting transformations analyze the data to break it into rows and columns. This larger data size should result in better type inference.

NOTE: Types and row splits on pre-existing datasets may be affected by this change.

For more information, see Improvements to the Type System.

Key Bug Fixes

TD-54742: Access to S3 is disabled after upgrade.
TD-53527: When importing a dataset via API that is sourced from a BZIP file stored on S3, the columns may not be properly split when the platform is permitted to detect the structure.

New Known Issues


AWS jobs run on Photon to publish to HYPER format fail during file conversion or writing.

Workaround: Run the job on the Spark running environment instead.


Receiving a "malformed_query: enter a filter criterion" error when importing a table from Salesforce.

NOTE: Some Salesforce tables require mandatory filters when they are queried. Mandatory filters are not currently supported for Salesforce connections.

Release 7.9

November 16, 2020

What's New

Plan View:

Transform Builder:

Changes in System Behavior

Manage Users section has been deprecated:

In previous releases, user management functions were available through the Manage Users section of the Admin Settings page. These functions have been migrated to the Workspace Settings page, where all of the previous functions are now available. The Manage Users section has been deprecated.

Better license management:

In prior releases, the platform locked out all users if the number of active users exceeded the number permitted by the license. This situation could occur if users were being added via API, for example.

Beginning in this release, the platform does not block access when the number of licensed users is exceeded.

NOTE: If you see the notification banner about license key violations, please adjust your users until the banner is removed. If you need to adjust the number of users associated with your license key, please contact .

For more information, see License Key.

Photon jobs now use ingestion for relational sources:

When a job is run on Photon, any relational data sources are ingested into the backend datastore as a preliminary step during sampling or transformation execution. This change aligns Photon job execution with future improvements to the overall job execution framework. No additional configuration is required.

Tip: Jobs that are executed on the platform run in an embedded running environment called Photon. Quick Scan samples are automatically executed in Photon.

For more information on ingestion, see Configure JDBC Ingestion.

Job results page changes:

Key Bug Fixes

TD-55125: Cannot copy flow. However, export and import of the flow enables copying.
TD-53475: Missing associated artifact error when importing a flow.

New Known Issues


Release 7.8

October 19, 2020

What's New





Changes in System Behavior

JDBC connection pooling disabled:

NOTE: The ability to create connection pools for JDBC-based connections has been disabled. Although it can be re-enabled if necessary, it is likely to be removed in a future release. For more information, see Changes to Configuration.

TDE format has been deprecated:

Tableau Server has deprecated support for the TDE file format. As of this release, all outputs and publications to Tableau Server must be generated using HYPER, the replacement format for TDE. 

For more information, see Tableau Hyper Data Type Conversions.

Enhanced Flow and Flow View menu options:

The context menu options for Flow View and Flow have been renamed and reorganized for a better user experience.

Key Bug Fixes


New Known Issues


When creating custom datasets from Snowflake, columns containing time zone data are rendered as null values in visual profiles, and publishing back to Snowflake fails.

Workaround: In your SELECT statement applied to a Snowflake database, wrap references to time zone-based data in a function that converts them to the UTC time zone (for example, Snowflake's CONVERT_TIMEZONE function). For more information, see Create Dataset with SQL.

Release 7.7

September 21, 2020

What's New

Flow View:

Changes in System Behavior

Deprecated Parameter History Panel Feature 

As part of the collaborative suggestions enhancement, support for the Parameter History panel has been removed from the software. For more information on the collaborative suggestions feature, see Overview of Predictive Transformation.

Classic Flow View no longer available

In Release 7.6, an improved version of Flow View was released. At the time of release, users could switch back to using the classic version. 

Beginning in this release, the classic version of Flow View is no longer available. 

Tip: The objects in your flows that were created in classic Flow View may be misaligned in the new version of Flow View. You can use auto-arrange to re-align your flow objects.

For more information, see Flow View Page.

Key Bug Fixes

Cannot publish results to relational targets when the flow name, output filename, or table name contains a hyphen (e.g., my - filename.csv).

New Known Issues