Release 6.0.2

This release addresses several bugs.

What's New

Changes to System Behavior

NOTE: As of Release 6.0, all new and existing customers must license, download, and install the latest version of the Tableau SDK. For more information, see Create Tableau Server Connections.

Upload:

Documentation:

Key Bug Fixes

Ticket      Description

TD-40471    SAML auth: Logout functionality not working
TD-39318    Spark job fails with parameterized datasets sourced from Parquet files
TD-39213    Publishing to a Hive table fails

New Known Issues

None.

Release 6.0.1

This release features support for several new Hadoop distributions and numerous bug fixes.

What's New

Connectivity:

Publishing:

API:

Changes to System Behavior

Photon

In the application and documentation, the following changes have been applied.

Reference | Description | Old Run Job page term | New Run Job page term | Doc
Hadoop | Supported running environment on the Hadoop cluster | Run on Hadoop | Spark | Configure for Spark
Photon running environment | Supported running environment | Trifacta Server | Photon | Configure Photon Running Environment
Photon in-browser client | In-browser web client | n/a | n/a | Configure Photon Client

Key Bug Fixes

Ticket      Description

TD-39779    MySQL JARs must be downloaded by the user.

            NOTE: If you are installing the databases in MySQL, you must download a set of JARs and install them manually. For more information, see Install Databases for MySQL.

TD-39694    Tricheck returns status code 200, but there is no response. It does not work through the Admin Settings page.

TD-39455    HDI 3.6 is not compatible with Guava 26.

TD-39086    Hive ingest job fails on Microsoft Azure.

New Known Issues

Ticket      Description

TD-40299    Cloudera Navigator integration cannot locate the database name for JDBC sources on Hive.

TD-40348    When loading a recipe in an imported flow that references an imported Excel dataset, the Transformer page displays an Input validation failed: (Cannot read property 'filter' of undefined) error, and the screen is blank.

            Workaround: In Flow View, select an output object and run a job. Then, load the recipe in the Transformer page and generate a new sample. For more information, see Import Flow.

TD-39969    On import, some Parquet files cannot be previewed and result in a blank screen in the Transformer page.

            Workaround: Parquet format supports row groups, which define the size of the data chunks that can be ingested. If the row group size of a Parquet source is greater than 10 MB, preview and initial sampling do not work. To work around this issue, import the dataset and create a recipe for it. In the Transformer page, generate a new sample for it. For more information, see Parquet Data Type Conversions. A sketch for checking row group sizes appears after this table.
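
If you want to verify whether a Parquet file is affected before importing it, you can inspect its row group sizes outside the application. The following sketch assumes the pyarrow package (not part of the product) and simply flags row groups larger than the 10 MB threshold described above.

    # Hypothetical helper (not part of the product): report Parquet row group
    # sizes so oversized row groups can be identified before import.
    # Requires the pyarrow package.
    import sys
    import pyarrow.parquet as pq

    THRESHOLD_BYTES = 10 * 1024 * 1024  # 10 MB preview/sampling limit noted above

    def report_row_groups(path: str) -> None:
        meta = pq.ParquetFile(path).metadata
        for i in range(meta.num_row_groups):
            size = meta.row_group(i).total_byte_size
            flag = "exceeds 10 MB" if size > THRESHOLD_BYTES else "ok"
            print(f"row group {i}: {size / (1024 * 1024):.1f} MB ({flag})")

    if __name__ == "__main__":
        report_row_groups(sys.argv[1])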

Release 6.0

This release introduces key features around column management, including multi-select and copy and paste of columns and column values. A new Job Details page captures more detailed information about job execution and enables closer monitoring of in-progress jobs. Some relational connections now support publishing to connected databases. This is our largest release yet. Enjoy!

NOTE: This release also announces the deprecation of several features, versions, and supported extensions. Please be sure to review Changes to System Behavior below.

What's New

NOTE: The PNaCl client for Google Chrome has been replaced by the WebAssembly client. This new client is now the platform default and is deployed to all clients through the browser. Please verify that all users in your environment are on Google Chrome 68+. For more information, see Desktop Requirements.


NOTE: Beginning in this release, the desktop application requires a 64-bit version of Microsoft Windows. For more information, see Install Desktop Application.


Wrangling:

Jobs:

Connectivity:

Language:

Workspace:

Publishing:

Administration:

Supportability:

Authentication:

API:

Changes to System Behavior

NOTE: This release requires NodeJS 10.13.0. See System Requirements.

Configuration:

To simplify configuration of the most common feature enablement settings, some settings have been migrated to the new Workspace Settings page. For more information, see Workspace Settings Page.

NOTE: Over subsequent releases, more settings will be migrated to the Workspace Settings page from the Admin Settings page. For more information, see Changes to Configuration.

See Platform Configuration Methods.

See Admin Settings Page.

API:

NOTE: In the next release, the v3 version of the APIs will be removed from the product. These End of Life endpoints will no longer be available for interaction. You must migrate your usage to the v4 APIs. For more information, see Changes to the APIs.

CLI:

NOTE: The command line interface (CLI) tools use the v3 endpoints. In the next release, the CLI will reach its End of Life, and these tools will no longer be provided with the software distribution. You must migrate your use of the CLI to the v4 APIs.
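
For example, a CLI-driven job run maps to an authenticated request against a v4 endpoint. The sketch below is illustrative only; the host, endpoint path, payload fields, and token-based authentication are assumptions that should be checked against the v4 API reference for your deployment.

    # Illustrative sketch of replacing a CLI-driven job run with a v4 API call.
    # The host, endpoint path, payload fields, and bearer-token auth shown here
    # are assumptions; confirm them against your deployment's v4 API reference.
    import requests

    BASE_URL = "https://wrangler.example.com:3005"  # hypothetical host and port
    TOKEN = "<access-token>"                        # hypothetical credential

    response = requests.post(
        f"{BASE_URL}/v4/jobGroups",                 # assumed v4 job-run endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"wrangledDataset": {"id": 123}},      # hypothetical dataset id
    )
    response.raise_for_status()
    print(response.json())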

Java 7:

NOTE: In the next release, support for Java 7 will reach end of life. The product will no longer be able to use Java 7. Please upgrade to Java 8 on the server and on your Hadoop cluster.

Changes to release numbering system:

In Release 5.0 and earlier, each release was given a separate release number, with each release incrementing that number. For example, the Release 4.x product line was numbered Release 4.0, Release 4.1, and Release 4.2.

In Release 5.1, the product moved to a monthly milestone release process. Monthly milestones were given separate release numbers in the following format: Release 5.1m1, Release 5.1m2, Release 5.1m3, and Release 5.1m4. The fifth milestone was the generally available release for Release 5.1.

Beginning in this release, each monthly milestone receives a separate release number. For this release, the milestones are Release 5.6, Release 5.7, and Release 5.8. Release 5.9 is the generally available release.

This change in numbering scheme does not affect the scope and frequency of releases.

Errata:

In prior releases, the product and documentation stated that the platform implemented a version of regular expressions based on JavaScript syntax. This is incorrect.

The platform implements a version of regular expressions based on RE2 and PCRE regular expressions.

NOTE: This is not a change in behavior. Only the documentation has been changed.
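
In practice, this matters for constructs whose support differs between flavors. The snippet below is a hedged illustration that uses Python's re module as a stand-in for PCRE-style matching; it is not the platform's engine, and the comment about RE2 reflects RE2's documented design rather than product behavior.

    # Hedged illustration (Python's re module, not the platform's engine):
    # backreferences such as \1 are accepted by PCRE-style and JavaScript
    # engines, but RE2 rejects them by design. Patterns that rely on such
    # constructs may need to be rewritten.
    import re

    text = "error error at line 12"
    pattern = r"(\w+)\s+\1"   # matches an immediately repeated word

    match = re.search(pattern, text)
    print(match.group(0) if match else "no match")   # prints: error error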

Key Bug Fixes

Ticket      Description

TD-36332    Data grid can display wrong results if a sample is collected and the dataset is unioned.
TD-36192    Canceling a step in the recipe panel can result in column menus disappearing in the data grid.
TD-35916    Cannot log out via SSO.
TD-35899    A deployment user can see all deployments in the instance.
TD-35780    Upgrade: Duplicate metadata in separate publications causes DB migration failure.
TD-35644    Extractpatterns with the "HTTP Query strings" option doesn't work.
TD-35504    Canceling a job throws a 405 status code error. Clicking Yes repeatedly pops up the Cancel Job dialog.
TD-35486    Spark jobs fail on the LCM function when it uses negative numbers as inputs.

TD-35483    Differences in how the WEEKNUM function is calculated in the Photon and Spark running environments, due to the underlying frameworks on which the environments are built.

            NOTE: Photon and Spark jobs now behave consistently. Week 1 of the year is the week that contains January 1.

            For more information, see Changes to the Language.

TD-35481    Upgrade: Script is malformed due to SplitRows not having a Load parent transform.
TD-35177    Login screen pops up repeatedly when access permission is denied for a connection.

TD-27933    For multi-file imports lacking a newline in the final record of a file, this final record may be merged with the first record of the next file and then dropped in the running environment.

New Known Issues

Ticket      Description

TD-39513    Import of a folder of Excel files as a parameterized dataset imports only the first file, and sampling may fail.

            Workaround: Import the files as separate datasets and union them together.

TD-39455    HDI 3.6 is not compatible with Guava 26.

            Workaround: HDI 3.6 supports Guava 14. The solution is to remove the Guava 26 file from the Data Service class path. For more information, see Troubleshooting in Configure for HDInsight.

TD-39092    $filepath and $sourcerownumber references are not supported for Parquet file inputs.

            Workaround: Upload your Parquet files. Create an empty recipe and run a job to generate an output in a different file format, such as CSV or JSON. Use that output as a new dataset. See Build Sequence of Datasets.

            For more information on these references, see Source Metadata References.

TD-39086    Hive ingest job fails on Microsoft Azure.

TD-39053    Cannot read datasets from Parquet files generated by Spark that contain nested values.

            Workaround: In the source for the job, change the data types of the affected columns to String and re-run the job on Spark. A sketch of this conversion appears after this table.

TD-39052    Sign-out using the reverse proxy method of SSO does not work after upgrade.

TD-38869    Upload of Parquet files does not support nested values, which appear as null values in the Transformer page.

            Workaround: Unnest the values before importing them into the platform.

TD-37683    Send a copy does not create independent sets of recipes and datasets in the new flow. If imported datasets are removed from the source flow, they disappear from the sent version.

            Workaround: Create new versions of the imported datasets in the sent flow.

TD-36145    The Spark running environment recognizes numeric values preceded by + as Integer or Decimal data type. The Photon running environment does not and types these values as strings.

TD-35867    The v3 publishing API fails when publishing to alternate S3 buckets.

            Workaround: You can use the corresponding v4 API to perform these publication tasks. For more information on a workflow, see API Workflow - Manage Outputs.
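
For TD-39053, the workaround amounts to serializing the nested columns before the Parquet output is written. The sketch below is a hedged illustration in PySpark; the column detection, paths, and use of to_json are assumptions showing one reasonable way to flatten nested values to String.

    # Hedged PySpark sketch: serialize nested columns to String before writing
    # Parquet, so the output can be read as flat values downstream.
    # The input/output paths below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import to_json, col

    spark = SparkSession.builder.appName("flatten-nested-columns").getOrCreate()

    df = spark.read.parquet("/data/input_with_nested_columns")  # hypothetical path

    # Convert each nested (struct/array/map) column to its JSON string form.
    nested_cols = [f.name for f in df.schema.fields
                   if f.dataType.typeName() in ("struct", "array", "map")]
    for name in nested_cols:
        df = df.withColumn(name, to_json(col(name)))

    df.write.mode("overwrite").parquet("/data/output_flattened")  # hypothetical path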