Release 3.2.1

This release contains numerous bug fixes and several new features.

What's New

Transformer Page:


Admin, Install, & Config:

Changes to System Behavior

Changes to the Language:

Changes to the Command Line Interface:

Miscellaneous Changes:

Key Bug Fixes

TD-19404: Split transform using at parameter values out of range of cell size generates an error in Pig.
TD-19150: On Photon, unnest transform fails if pluck=true.
TD-19032: Swapping rapidly between source datasets that have already been edited may cause a No samples found error.
TD-18933: You cannot load a dataset that utilizes another dataset via join or union three levels deep.
TD-18268: If you profile a wide column (one that contains many characters of data in each cell value), the machine learning service can crash.
TD-18093: Changes to a dataset that generate new columns can break any downstream lookups that use the dataset.


New Known Issues


Publish to Redshift of single-file CSV or JSON files fails.

Workaround: Publish files to Redshift as multi-part files. See Run Job Page.


After upgrade, job card summaries on the Jobs page may fail to load for jobs that were executed in the pre-upgrade version and contain steps with functions that have since been renamed.

Workaround: You can re-run the job in the upgraded version. For more information on the renamed functions for Release 3.2.1, see Changes to the Language.


When publishing to S3, you cannot write to a single file in an append publishing action.

Workaround: You can change the publish action to recreate the object, replace the object, or save it as a multi-file output.


When switching between an append and a replace publishing action in the application, any selected compression setting is lost, and you cannot set this value again.

Workaround: Cancel the edit in progress. Then, re-edit the publishing action to apply the compression setting to the replace publishing action.


You cannot configure a publishing location to be a directory that does not already exist.

Workaround: Create the directory on the datastore outside of the platform. Verify that the appropriate user accounts have access to the directory.


Users are permitted to select compressed formats for the append publish action, which is not supported.

NOTE: For Release 3.2.1, the append publish action does not support the use of compression.


Job execution fails with a java.lang.OutOfMemoryError: unable to create new native thread exception in the job log.

Workaround: You can try to raise the soft and hard limit on number of processes available to the platform. For more information, see Miscellaneous Configuration.
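The process limits mentioned in the workaround are operating-system settings. A minimal sketch for Linux follows; the service account name (trifacta) and the limit values are illustrative assumptions, not prescribed values, so consult Miscellaneous Configuration for the supported settings.

```shell
# Illustrative only: inspect and raise the max-user-processes (nproc)
# limit for the account that runs the platform.

# Check the current soft limit for the running shell:
ulimit -u

# Persist higher soft/hard limits in /etc/security/limits.conf
# (requires root; takes effect on the account's next login):
#   trifacta  soft  nproc  8192
#   trifacta  hard  nproc  16384
```

After changing limits.conf, the service account must log in again (or the service must be restarted) for the new limits to apply.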

TD-19678: Transformer Page

Column browser does not recognize when you place a checkmark next to the last column in the list.

Workaround: You can move the column to another location and then select it.

TD-19384: Transformer Page

Preview cards take a long time to load when selecting values from a Datetime column.

Workaround: For selection purposes, you can change the data type to String. Then, make your selections and build your transform steps before switching back to Datetime data type.

TD-18584: Type System

settype transforms that do not include a specified Datetime formatting string and its variant fail on upgrade. In previous releases, this formatting could be omitted, and the variant to apply was inferred.

Workaround: Review the variant information in the transform. Then, remove the step and re-apply the Date formatting through the Type drop-down for the column. The required type information is applied.


Release 3.2

This release introduces the following key features:

Details are below.

What's New

Object Model:

Transformer Page:

Admin, Install, & Config:

NOTE: The minimum system requirements for the platform have changed for this release. For more information, see System Requirements.


Command Line Interface:


Job Execution and Performance:



Changes to System Behavior

This section outlines changes to how the platform behaves that have resulted from features or bug fixes in Release 3.2.

Post-Upgrade Sampling

NOTE: Due to changes in system behavior, all existing random samples for a dataset are no longer available after upgrading to this release. For any upgraded dataset, the selected sample reverts to the default sample, the first N rows of the dataset. The number of rows in the sample depends on the number of columns, data density, and other factors.

When you load your dataset into the Transformer page for the first time:

Changes to

Key Bug Fixes

TD-18319: Inconsistent results for DATEDIFF function across running environments. For more information, see Changes to the Language.
TD-16255: windowfill function fails to fill over some empty cells.
TD-16086: Job list drop-down fails to enable selection of correct jobs.
TD-16084: Job cards display CLI Job source for jobs launched from the application.
TD-15609: Column filtering only works if the filtering value is entered in lowercase.

Attempting to publish to Cloudera Navigator for a job results in a DataNotFoundException.

TD-15330: Pivot transform generates a "Cannot read property 'primitive' of undefined" error.
TD-14541: Names for private connections can collide with names of global connections, which can prevent the owning user from editing the private connection.
TD-14397: Left or outer join against a dataset with deduplicate as the last script line fails in Pig execution.
TD-13162: Join key selection screen and buttons are not accessible on a small desktop screen.

New Known Issues

TD-19150: Transformer Page

On Photon, unnest transform fails if pluck=true.

Workaround: The pluck parameter forces the removal of the unnested values from the source. You may be able to use the replace transform on the source column to remove these values.

TD-19032: Transformer Page

Swapping rapidly between source datasets that have already been edited may cause a No samples found error.

Workaround: Log out and log in again. Perform your dataset swap as needed.

TD-18933: Transformer Page

You cannot load a dataset that utilizes another dataset via join or union three levels deep.

Example: Three datasets (Level1, Level2, Level3) each integrate ref_dataset via join. You union Level1 and Level2. Then, when you try to union those two into Level3, you get an error.

Workaround: You can generate results for the lower-level datasets and then create a new wrangled dataset from these results. However, you no longer automatically inherit changes from the source dataset(s).

TD-18836: Transformer Page

find function accepts negative values for the start index. These values are consumed but produce unexpected results.

Workaround: Use non-negative values as inputs.

TD-18746: Transformer Page

When Photon is enabled, previews in the data grid may take up to 30 seconds to dismiss.

Workaround: This issue is related to the display of suggestion cards. Although it is not an ideal solution, you can try disabling the display of preview cards in the data grid options menu. See Data Grid Panel.


Platform fails to start if the user account configured for S3 access does not have the ListAllMyBuckets permission.

Workaround: Please verify that this user has the appropriate permissions.


In Release 3.1.2 and earlier, any datasource that has never been used to create a dataset is no longer available after upgrade.

Workaround: The assets remain untouched on the datastore where located. As long as the user has read permissions to the datastore area, the assets can be re-imported into the platform for Release 3.2 and later.

TD-18268: Transformer Page

If you profile a wide column (one that contains many characters of data in each cell value), the machine learning service can crash.

Workaround: Restart the machine learning service. If visual profiling of the column is important, look to split the column into separate columns and then profile each one individually.

TD-18093: Transformer Page - Tools

Changes to a dataset that generate new columns can break any downstream lookups that use the dataset.

Workaround: If the lookup breaks, you can recreate it.


Preview of Hive tables intermittently fails to show table data. When you click the Eye icon to preview Hive table data, you might see a spinner icon.

Workaround: Preview data on another Hive table. Then, preview the data on the first table again. If you do not have another table to preview, try previewing the Hive table three times, which might succeed.


References to Zookeeper remain in the platform.

Workaround: As of Release 3.2, the platform no longer requires access to Zookeeper. However, removal of all references in the platform requires more work, which will be completed in a future release.

TD-17657: Transform Builder

splitrows transform allows splitting even if the required parameter on is set to an empty value.

Workaround: Make sure you specify a valid value for on.

TD-17333: Transformer Page

Sorting on a Datetime column places 00:00:00 values at the bottom of the list when operating on the JavaScript running environment.

Workaround: This issue does not appear in the Photon running environment or in jobs executed in Photon or Hadoop Pig. See Configure Photon Running Environment.

TD-16419: Transform Builder

Comparison functions added through the Builder are changed to operators in the recipe.

Importing a directory of Avro files only imports the first file when the Photon running environment is enabled.

Workaround: You can try re-exporting from the source system in a different format or importing the data when the JavaScript-based running environment is enabled. For more information on how to re-enable, see Configure Photon Running Environment.

TD-14622: Script Infrastructure

Python and Java UDFs fail to accept inputs with zero parameters.

Workaround: Insert a dummy parameter as part of the input.

TD-14131: Compilation/Execution

splitrows transform does not work after a backslash.

Platform cannot execute jobs on Pig that are sourced from S3, if OpenJDK is installed.

Workaround: Install Oracle JDK 1.8 before installing the Trifacta platform. See System Requirements.
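Before installation, the JDK vendor can be confirmed from the first line of the java -version output, since OpenJDK builds report "openjdk" there. A minimal sketch follows; the detect_jdk helper name is hypothetical, not part of the platform.

```shell
# Hypothetical helper: classify the JDK vendor from the first line of
# `java -version 2>&1` output. OpenJDK builds report "openjdk version ...",
# while Oracle JDK 1.8 reports "java version ...".
detect_jdk() {
  case "$1" in
    *openjdk*|*OpenJDK*) echo "openjdk" ;;
    *) echo "other" ;;
  esac
}

# Typical usage (assumes java is on the PATH):
# detect_jdk "$(java -version 2>&1 | head -n 1)"
```

If the result is "openjdk", install Oracle JDK 1.8 before installing the platform, per the workaround above.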