Release 6.0.2

This release delivers several bug fixes.

What's New

Changes to System Behavior

NOTE: As of Release 6.0, all new and existing customers must license, download, and install the latest version of the Tableau SDK. For more information, see Create Tableau Server Connections.



Key Bug Fixes

TD-40471: SAML auth: Logout functionality does not work.
TD-39318: Spark job fails with parameterized datasets sourced from Parquet files.
TD-39213: Publishing to a Hive table fails.

New Known Issues


Release 6.0.1

This release features support for several new Hadoop distributions and numerous bug fixes.

What's New

Changes to System Behavior

Key Bug Fixes


MySQL JARs must be downloaded by the user.

NOTE: If you are installing the databases in MySQL, you must download and install a set of JARs. For more information, see Install Databases for MySQL.

TD-39694: Tricheck returns status code 200 but no response body, and it does not work through the Admin Settings page.

HDI 3.6 is not compatible with Guava 26.


Hive ingest job fails on Microsoft Azure.

New Known Issues

TD-40299: Cloudera Navigator integration cannot locate the database name for JDBC sources on Hive.

When loading a recipe in an imported flow that references an imported Excel dataset, the Transformer page displays an Input validation failed: (Cannot read property 'filter' of undefined) error, and the screen is blank.

Workaround: In Flow View, select an output object, and run a job. Then, load the recipe in the Transformer page and generate a new sample. For more information, see Import Flow.


On import, some Parquet files cannot be previewed and result in a blank screen in the Transformer page.

Workaround: The Parquet format supports row groups, which define the size of data chunks that can be ingested. If the row group size in a Parquet source is greater than 10 MB, preview and initial sampling do not work. To work around this issue, import the dataset and create a recipe for it. In the Transformer page, generate a new sample for it. For more information, see Parquet Data Type Conversions.

Release 6.0

This release introduces key features around column management, including multi-select and copy and paste of columns and column values. A new Job Details page captures more detailed information about job execution and enables more detailed monitoring of in-progress jobs. Some relational connections now support publishing to connected databases. This is our largest release yet. Enjoy!

NOTE: This release also announces the deprecation of several features, versions, and supported extensions. Please be sure to review Changes to System Behavior below.

What's New

NOTE: Beginning in this release, a 64-bit version of Microsoft Windows is required. For more information, see Install for Wrangler Enterprise Application.

Changes to System Behavior

NOTE: NodeJS 10.13.0 is now required. See System Requirements.


To simplify configuration of the most common feature enablement settings, some settings have been migrated to the new Workspace Admin page. For more information, see Workspace Admin Page.

NOTE: Over subsequent releases, more settings will be migrated to the Workspace Admin page from the Admin Settings page. For more information, see Changes to Configuration.

See Platform Configuration Methods.

See Admin Settings Page.

Java 7:

NOTE: In the next release, support for Java 7 reaches end of life, and the product will no longer run on Java 7 at all. Please upgrade to Java 8 on the platform and on your Hadoop cluster.

Key Bug Fixes

TD-36332: Data grid can display wrong results if a sample is collected and the dataset is unioned.
TD-36192: Canceling a step in the recipe panel can result in column menus disappearing in the data grid.
TD-35916: Cannot log out via SSO.
TD-35899: A deployment user can see all deployments in the instance.
TD-35780: Upgrade: duplicate metadata in separate publications causes DB migration failure.


The extractpatterns transform with the "HTTP Query strings" option does not work.
Canceling a job throws a 405 status code error, and clicking Yes repeatedly re-opens the Cancel Job dialog.
TD-35486: Spark jobs fail on the LCM function when it uses negative numbers as inputs.

The WEEKNUM function is calculated differently across running environments, due to the underlying frameworks on which those environments are built.

NOTE: Jobs now behave consistently across running environments. Week 1 of the year is the week that contains January 1.
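As an illustration of that convention, here is a minimal Python sketch of a week number where week 1 contains January 1. It assumes weeks start on Sunday, which the note above does not specify, so treat the start day as an assumption:

```python
from datetime import date

def weeknum(d: date) -> int:
    """Week number; week 1 is the week containing January 1.

    Assumes weeks start on Sunday (illustrative convention only).
    """
    jan1 = date(d.year, 1, 1)
    # weekday(): Monday=0 .. Sunday=6; shift so Sunday=0 to find how far
    # January 1 sits into its own week.
    start_offset = (jan1.weekday() + 1) % 7
    days = (d - jan1).days + start_offset
    return days // 7 + 1
```

For example, with January 1, 2018 falling on a Monday, January 6 (Saturday) is still in week 1 and January 7 (Sunday) starts week 2.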


Upgrade Script is malformed due to SplitRows not having a Load parent transform.
TD-35177: Login screen pops up repeatedly when access permission is denied for a connection.

For multi-file imports in which a file's final record lacks a trailing newline, that record may be merged with the first record of the next file and then dropped in the running environment.
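One way to avoid the boundary merge is to concatenate the files yourself before import, guaranteeing a newline at each file boundary. A minimal pre-processing sketch (safe_concat is a hypothetical helper, not part of the product):

```python
from pathlib import Path

def safe_concat(paths, out_path):
    """Concatenate text files, inserting a newline when a file's
    final record lacks one, so records cannot merge across files."""
    with open(out_path, "wb") as out:
        for p in paths:
            data = Path(p).read_bytes()
            out.write(data)
            if data and not data.endswith(b"\n"):
                out.write(b"\n")
```

The single combined file can then be imported as one dataset.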


New Known Issues


Importing a folder of Excel files as a parameterized dataset imports only the first file, and sampling may fail.

Workaround: Import as separate datasets and union together.


HDI 3.6 is not compatible with Guava 26.


$filepath and $sourcerownumber references are not supported for Parquet file inputs.

Workaround: Upload your Parquet files. Create an empty recipe and run a job to generate an output in a different file format, such as CSV or JSON. Use that output as a new dataset. See Build Sequence of Datasets.

For more information on these references, see Source Metadata References.


Hive ingest job fails on Microsoft Azure.


Datasets cannot be read from Spark-generated Parquet files that contain nested values.

Workaround: In the source for the job, change the data types of the affected columns to String and re-run the job on Spark.

TD-39052: Signout using the reverse proxy method of SSO does not work after upgrade.

Upload of Parquet files does not support nested values, which appear as null values in the Transformer page.

Workaround: Unnest the values before importing into the platform.
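One way to unnest records before import is to flatten them into dotted-path columns. A minimal sketch (flatten is a hypothetical helper; the dotted naming scheme is an assumption, not a product convention):

```python
def flatten(record, prefix=""):
    """Flatten nested dict records into a single level using dotted keys,
    e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat
```

Applying this to each record before writing the upload file yields columns the Transformer page can display instead of nulls.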


Send a copy does not create independent sets of recipes and datasets in the new flow. If imported datasets are removed from the source flow, they disappear from the sent version.

Workaround: Create new versions of the imported datasets in the sent flow.


The Spark running environment recognizes numeric values preceded by + as Integer or Decimal data types; other running environments do not, and type these values as strings.
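A pre-processing sketch that strips the leading + from numeric strings so every environment sees the same value (normalize_plus is a hypothetical helper, not part of the product):

```python
def normalize_plus(value: str) -> str:
    """Strip a leading '+' from a numeric string; leave other values alone."""
    v = value.strip()
    # Only treat it as numeric if the remainder is digits with at most one dot.
    if v.startswith("+") and v[1:].replace(".", "", 1).isdigit():
        return v[1:]
    return value
```

Non-numeric values such as "+abc" and already-signed negatives such as "-4" pass through unchanged.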


The v3 publishing API fails when publishing to alternate S3 buckets.

Workaround: You can use the corresponding v4 API to perform these publication tasks. For more information on a workflow, see API Workflow - Manage Outputs.